Journal of the American Medical Informatics Association (JAMIA)
. 2023 Jun 5;30(9):1552–1557. doi: 10.1093/jamia/ocad094

A snapshot of artificial intelligence research 2019–2021: is it replacing or assisting physicians?

Mahmoud Elmahdy, Ronnie Sebro
PMCID: PMC10436151  PMID: 37279884

Abstract

Artificial intelligence (AI) has the potential to be a disruptive technology in healthcare. Recently, there has been increased speculation that AI may be used to replace healthcare providers in the future. To address this question, we reviewed over 21 000 articles published in medical specialty journals between 2019 and 2021 to evaluate whether the AI models they described were intended to assist or replace healthcare providers. We also evaluated whether Food and Drug Administration (FDA)-approved AI models were licensed to assist or replace healthcare providers. We find that most AI models published in this period were intended to assist rather than replace healthcare providers, and that most of the published AI models performed tasks that could not be done by healthcare providers.

Keywords: artificial intelligence, disruptive technology

INTRODUCTION

Artificial intelligence (AI) algorithms are increasingly used in biomedical research.1 AI has shown great promise toward changing the way healthcare is administered by increasing healthcare providers’ efficiency,2,3 decreasing errors,3,4 improving healthcare access,5,6 increasing diagnostic accuracy,6–8 and improving healthcare quality.9 AI has all the features of a disruptive innovation.10 Disruptive innovations provide novel solutions by addressing needs that have never been met before, leading to the creation of a new market. AI algorithms have several advantages over human providers, including not being susceptible to human limitations such as fatigue, illness, and death.11 These advantages can translate into improved healthcare. For example, wearable devices can identify patients with irregular heart rhythms and notify them so that they can be further evaluated by healthcare providers, improving the detection of an abnormality (an irregular heart rhythm) that might otherwise go unrecognized.12

One major concern is that in the future AI will replace healthcare providers in performing tasks related to healthcare.11 The aims of this article are (1) to investigate how AI algorithms in recently published articles are proposed to be used in clinical practice—whether they are intended to assist or replace healthcare providers, and (2) to investigate whether Artificial Intelligence and Machine Learning (AI/ML)-enabled medical devices approved by Food and Drug Administration (FDA) are licensed to assist or replace healthcare providers. The results of these analyses will help answer the question as to whether AI will assist or replace healthcare providers in the future.

DATA AND METHODS

We evaluated all original research articles published between January 1, 2019, and December 31, 2021, in the 3 non-review journals with the highest impact factor (IF) in the Clarivate citation report12 within each of 16 medical specialties (Supplementary Data S1). All review, case-report, and perspective papers were excluded. Each research article from each journal was then reviewed by 2 researchers.

We excluded all articles that did not use AI, as well as models that used only classical statistical techniques. AI included all deep learning models using convolutional neural networks, vision transformers, or other similar architectures. ML models included all regression methods (least absolute shrinkage and selection operator [LASSO] regression, ridge regression, elastic net regression, polynomial regression, kernel regression), excluding univariate, multivariable, and multivariate logistic regression and linear regression; all classification methods (support vector machines, random forest classifiers, Naïve Bayes classifiers, gradient boosted trees, decision trees, extreme gradient boosted machines, principal component analysis, multidimensional scaling, artificial neural networks); and all clustering methods, including k-nearest neighbors and hierarchical clustering. For each article, we recorded whether the AI was used to (1) assist healthcare providers or (2) replace healthcare providers (Figure 1). We also recorded whether the AI was used to do a task healthcare providers could do or already performed.

Figure 1. Flow chart of the study design.

We defined an article as using AI to assist healthcare providers if (1) the AI model was used to do something healthcare providers ordinarily would not be able to do, for example, prognostic modeling of disease progression or predicting a future event, or (2) the article stated that the AI model was less accurate or had a lower area under the curve (AUC) than the healthcare providers when both performances were compared.

We defined an article as using AI to replace healthcare providers if (1) the article reported that the AI model had similar or higher accuracy or AUC than the healthcare providers for doing a task or (2) the article stated that an untrained or less trained individual using the AI model had similar or higher accuracy/AUC than the healthcare providers for doing a task.

Two researchers independently determined whether the authors of the research article explicitly stated that the aim of the research was to replace or assist healthcare providers. Discrepancies were resolved with consensus after discussion. The same 2 researchers also independently evaluated whether the conclusion of the research stated that AI could be used to replace or assist healthcare providers. Again, discrepancies were resolved in consensus after discussion.

AI/ML-Enabled Medical Devices approved by FDA

We evaluated the AI/ML-enabled medical devices approved by the FDA from November 8, 1995, through December 31, 2022.13 We reviewed each published summary to evaluate whether the license was approved to assist or to replace healthcare providers, and we recorded which medical specialty panel was responsible for evaluating the AI/ML device as listed on the FDA site. The data were reviewed independently by 2 physician researchers, and discrepancies were resolved in consensus.

Statistical analysis

We calculated the number and proportion of the total AI manuscripts by medical specialty. The proportions of papers using AI models to replace or assist healthcare providers were recorded. Statistics were performed using Microsoft Excel v2207. The chi-squared test was used to evaluate whether the distribution of AI models used to replace or assist healthcare providers varied by specialty. All tests were 2-sided, and an a priori Type I error rate of 0.05 was chosen, so that P-values <0.05 were considered statistically significant.
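The chi-squared comparison described above can be reproduced with standard tools. The sketch below (Python with SciPy, not the Excel workflow the authors used) runs the test on a contingency table built from a subset of the Table 1 counts, for illustration only:

```python
# Chi-squared test of whether the assist/replace split varies by specialty.
# Counts taken from a subset of Table 1 rows (illustration only).
from scipy.stats import chi2_contingency

# rows: specialties; columns: [assist, replace]
counts = [
    [6, 1],    # Anesthesiology
    [9, 5],    # Dermatology
    [48, 1],   # Ophthalmology
    [46, 1],   # Pathology
    [321, 3],  # Radiology
]

chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.1f}, dof = {dof}, P = {p:.2e}")
```

With expected counts this small in the "replace" column, a Fisher's exact test or Monte Carlo simulation would be a more careful choice than the asymptotic chi-squared approximation.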

RESULTS

We reviewed a total of 21 803 research articles from 48 non-review journals. Of these, 525 (2.4%) were original research articles that utilized AI. Radiology 324 (17.8%), Pathology 47 (3.7%), Ophthalmology 49 (2.7%), and Dermatology 14 (0.5%) were the fields with the highest proportions of AI articles (Figure 2). Almost all (n = 513; 97.7%) AI articles used algorithms that assisted healthcare providers, while only 2.3% (n = 12) of AI articles created algorithms to replace healthcare providers. The specialties with the highest proportions of AI articles used to assist healthcare providers were Emergency Medicine 14 (100.0%), Obstetrics and Gynecology 13 (100.0%), Urology 11 (100.0%), Orthopedics 10 (100.0%), and Neurology 9 (100.0%) (Table 1).

Figure 2. Original research articles in Artificial Intelligence (AI).

Table 1.

Artificial intelligence articles categorized as being used to replace or assist healthcare providers by specialty

Specialty | Total articles | Assista | Replaceb | Assists and performs a task currently done by a healthcare provider
Anesthesiology | 7 (100.0%) [1.3%] | 6 (85.7%) [1.2%] | 1 (14.3%) [8.3%] | 0 (0.0%) [0.0%]
Dermatology | 14 (100.0%) [2.7%] | 9 (64.3%) [1.8%] | 5 (35.7%) [41.7%] | 3 (33.3%) [4.5%]
Emergency Medicine | 14 (100.0%) [2.7%] | 14 (100.0%) [2.7%] | 0 (0.0%) [0.0%] | 0 (0.0%) [0.0%]
Internal Medicine | 9 (100.0%) [1.7%] | 8 (88.9%) [1.6%] | 1 (11.1%) [8.3%] | 0 (0.0%) [0.0%]
Neurology | 9 (100.0%) [1.7%] | 9 (100.0%) [1.8%] | 0 (0.0%) [0.0%] | 0 (0.0%) [0.0%]
Obstetrics and Gynaecology | 13 (100.0%) [2.5%] | 13 (100.0%) [2.5%] | 0 (0.0%) [0.0%] | 3 (23.1%) [4.5%]
Ophthalmology | 49 (100.0%) [9.3%] | 48 (98.0%) [9.4%] | 1 (2.0%) [8.3%] | 20 (41.6%) [29.9%]
Orthopedics | 10 (100.0%) [1.9%] | 10 (100.0%) [1.9%] | 0 (0.0%) [0.0%] | 3 (30.0%) [4.5%]
Otorhinolaryngology | 7 (100.0%) [1.3%] | 7 (100.0%) [1.4%] | 0 (0.0%) [0.0%] | 1 (13.0%) [1.5%]
Pathology | 47 (100.0%) [9.0%] | 46 (97.9%) [9.0%] | 1 (2.1%) [8.3%] | 6 (12.8%) [9.0%]
Pediatrics | 3 (100.0%) [0.6%] | 3 (100.0%) [0.6%] | 0 (0.0%) [0.0%] | 1 (33.3%) [1.5%]
Psychiatry | 0 (0.0%) [0.0%] | 0 (0.0%) [0.0%] | 0 (0.0%) [0.0%] | 0 (0.0%) [0.0%]
Radiology | 324 (100.0%) [61.7%] | 321 (99.1%) [62.6%] | 3 (0.9%) [25.0%] | 28 (8.7%) [41.8%]
Rehabilitation | 0 (0.0%) [0.0%] | 0 (0.0%) [0.0%] | 0 (0.0%) [0.0%] | 0 (0.0%) [0.0%]
Surgery | 8 (100.0%) [1.5%] | 8 (100.0%) [1.6%] | 0 (0.0%) [0.0%] | 0 (0.0%) [0.0%]
Urology | 11 (100.0%) [2.1%] | 11 (100.0%) [2.1%] | 0 (0.0%) [0.0%] | 2 (18.2%) [3.0%]
Total | 525 (100.0%) | 513 (97.7%) | 12 (2.3%) | 67 (13.1%)
a Assist: (1) the AI model was used to do something healthcare providers ordinarily would not be able to do, for example, prognostic modeling of disease progression or predicting a future event, or (2) the article stated that the AI model was less accurate or had a lower area under the curve (AUC) than the healthcare providers when both performances were compared.

b Replace: (1) the article reported that the AI model had similar or higher accuracy/AUC than the healthcare providers for doing a task, or (2) the article stated that an untrained or less trained individual using the AI model had similar or higher accuracy/AUC than the healthcare providers for doing a task.

Parentheses (): row percentage within each specialty.

Square brackets []: column percentage within each column.

The specialties with the highest proportion of AI articles that performed a job that a healthcare provider currently does were Ophthalmology 20 (41.6%), Dermatology 3 (33.3%), Obstetrics and Gynecology 3 (23.1%), and Urology 2 (18.2%) (Table 1).

The specialties with the highest proportion of AI articles used to replace healthcare providers were Dermatology 5 (35.7%), Anesthesiology 1 (14.3%), Internal Medicine 1 (11.1%), Pathology 1 (2.1%), Ophthalmology 1 (2.0%), and Radiology 3 (0.9%) (Table 1). There was a statistically significant difference between the proportions of AI models used to “replace” and “assist” among medical specialties (P < .001). We also found a statistically significant difference between the proportions of AI models that assist and perform tasks that are currently done by a healthcare professional between medical specialties (P < .001).

AI/ML-enabled medical devices approved by FDA

All AI/ML-enabled medical devices approved by the FDA between November 8, 1995, and December 31, 2022, are licensed with the indication to assist rather than to replace healthcare providers (Supplementary Data S2). Most of the FDA-approved AI/ML algorithms were in Radiology 392 (75.2%), Internal Medicine 81 (15.5%), and Neurology 14 (2.7%) (Table 2).

Table 2.

Number of Artificial Intelligence and Machine Learning (AI/ML)-enabled medical devices approved by the Food and Drug Administration (FDA) between 1995 and 2022, by specialty

Specialty (panel)a | Number (%)
Anesthesiology | 4 (0.8)
Internal Medicineb | 81 (15.5)
Neurology | 14 (2.7)
Obstetrics and Gynaecology | 1 (0.2)
Ophthalmic | 7 (1.3)
Orthopedic | 1 (0.2)
Pathology | 4 (0.8)
Radiology | 392 (75.2)
Surgery | 5 (1.0)
Otherc | 12 (2.3)
Total | 521 (100.0)
a Based on the panel listed on the FDA site.

b Includes the Cardiovascular, Hematology, Gastroenterology & Urology, Gastroenterology-Urology, and General Hospital panels.

c Includes the Clinical Chemistry, Dental, and Microbiology panels.

These findings show that AI models assist rather than replace healthcare providers. Current research is directed toward helping healthcare providers diagnose patients,7,14 and these findings are reflected in the AI/ML-enabled medical devices approved by the FDA, since the approved devices are licensed to assist healthcare providers in doing their jobs.

DISCUSSION

We found that most AI articles published in the top 3 non-review journals in each of 16 medical specialties between 2019 and 2021 described AI models intended to assist rather than replace healthcare providers, and that ∼87% of the AI models that assisted healthcare providers performed tasks that healthcare providers could not currently do. Radiology and Pathology were the fields with the highest proportions of AI original research articles. Dermatology and Anesthesiology were the fields with the highest proportions of AI models used to replace healthcare providers. We also found significant differences between specialties in the proportions of AI models used to replace or assist healthcare providers.

There are several factors that may affect the relative proportion of “replace” versus “assist” AI/ML methodologies. These factors include better imaging methodologies or devices, more easily and readily available real-world data, more interest in clinicians having complex problems solved, and more interest in AI researchers in working in a particular medical specialty. It is difficult to predict how these factors have changed in the past, and how these factors will change in future.

There was a significant difference in the distribution of AI models used to replace healthcare providers between medical specialties. The highest proportion of AI models used to replace healthcare providers was seen in Dermatology, where 5 of 14 (35.7%) of the AI models appeared to be intended to replace healthcare providers. Dermatologists often evaluate skin lesions and make an assessment or diagnosis based on their prior experience and training.15 These skin lesions are often biopsied, so the true gold-standard histopathological diagnosis is known. Here, the ground truth previously used in clinical practice, the dermatologist's assessment or diagnosis, is in danger of being replaced by AI models that perform better and are more accurate than the dermatologist. This suggests that tasks in which a healthcare provider creates a proxy for a histopathological or immunoassay gold standard are amenable to being replaced by AI models.

There are multiple examples of AI algorithms created to assist healthcare providers. For example, postpartum hemorrhage is a leading cause of maternal mortality and morbidity across the world. Postpartum hemorrhage is defined as cumulative blood loss of ≥1000 mL, or blood loss accompanied by signs or symptoms of hypovolemia, within 24 h after birth. Data from a retrospective cohort of women delivering at ≥23 weeks' gestation were used to build an ML predictive model. This model identified women at high risk for postpartum hemorrhage at labor admission with high discriminative power (C statistic: 0.92; 95% confidence interval [CI], 0.91–0.92). Identification of these women would allow more rapid diagnosis and intervention, resulting in improved outcomes and better resource allocation.14
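The C statistic reported for this model is the area under the receiver operating characteristic (ROC) curve. As a minimal sketch (synthetic labels and risk scores, not the study's data), it can be computed with scikit-learn:

```python
# Computing a C statistic (area under the ROC curve) for a binary outcome.
# The labels and predicted risks below are synthetic, for illustration only.
from sklearn.metrics import roc_auc_score

y_true = [0, 0, 0, 0, 1, 0, 1, 1, 0, 1]  # 1 = event occurred
y_score = [0.1, 0.2, 0.15, 0.3, 0.8, 0.65, 0.9, 0.7, 0.35, 0.6]  # model risk

c_statistic = roc_auc_score(y_true, y_score)
print(f"C statistic = {c_statistic:.2f}")
```

The C statistic is the probability that a randomly chosen patient with the outcome receives a higher predicted risk than a randomly chosen patient without it; 0.5 is chance discrimination and 1.0 is perfect discrimination.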

Another example is an AI model assisting healthcare providers with intraoperative hypotensive episodes. Hypotensive episodes occur when a patient has a mean arterial blood pressure <65 mm Hg intraoperatively.16 They may result from anesthetic drugs, comorbidities, or intraoperative blood loss,16 and are associated with complications such as renal insufficiency, myocardial injury, and increased mortality.16 The Hypotension Prediction (HYPE) randomized clinical trial compared an ML-derived early warning system with standard care and showed that the system decreased the average duration of hypotension, reducing intraoperative hypotension duration by a median difference of 16.7 min (95% CI, 7.7–31.0 min; P < .001).16

There are also examples of AI algorithms created to potentially replace healthcare providers. One example is an AI model that can detect pulmonary nodules and classify them as benign or malignant. Lung cancer is a leading cause of cancer incidence and mortality, with an estimated 1.8 million deaths per year globally.17 Lung cancers present as lung nodules, which can be difficult for a radiologist to detect on chest radiography, and lung nodules may be the first radiological sign of lung cancer, so an AI model detecting pulmonary nodules would allow more rapid intervention.3 The model performed significantly better than 15 of 18 physicians (P < .05) in detecting the nodules, and significantly better than 11 of the physicians in classifying the nodules as benign or malignant.3

Another example is an AI model developed to automate intubation. Intubation is commonly performed by anesthetists to secure a patient's airway. It requires visualization, correct identification of anatomical structures, and advancement of the endotracheal tube into the trachea through the upper airway.18 The AI model helped 100% of the untrained subjects (7 medical students) effectively intubate a manikin. The median time to intubate the manikin was 15.0 s (interquartile range [IQR], 12.9–18.1 s) for anesthetists using AI and 15.9 s (IQR, 13.2–22.8 s) for untrained subjects. This shows that AI models can assist less experienced subjects in intubating patients accurately and rapidly,18 which could be helpful in emergency situations and in underserved areas with limited access to anesthetists.18

AI was predominantly used to do tasks that cannot currently be performed by a healthcare provider. An example comes from the field of Radiology. Prior to the rapid polymerase chain reaction test, chest radiographs were used to triage patients with suspected coronavirus disease 2019 (COVID-19).19,20 Deep learning analysis of these triage chest radiographs could reasonably accurately predict whether a patient had COVID-19, whether the patient should be admitted, and whether the patient should be admitted to the intensive care unit.19 These endpoints are clinically significant because they affect patient prognosis, yet a radiologist cannot currently make these predictions accurately. This is an instance of AI helping a physician do a task that the physician could not do. We found that over the studied time period, most AI models were created to do tasks that healthcare providers could not do, and as a result assisted rather than replaced healthcare providers.

Limitations

This study has a few limitations. We reviewed over 21 000 articles in the highest-impact journals; however, we were unable to review every AI article ever published. To our knowledge, no systematic review has previously addressed this question across all medical specialties, and a systematic review without predefined limits would not be feasible: a simple search of the Scopus database for the terms machine learning OR artificial intelligence identifies more than 1 500 000 manuscripts. We only evaluated original research AI articles published in the highest-IF journals in each of 16 fields between January 1, 2019, and December 31, 2021. The proportions of articles that used AI algorithms to assist or to replace healthcare providers may differ in journals with lower IFs or in journals outside the specialties described by Clarivate. We hypothesize that articles published in higher-impact journals have a larger readership and may better approximate the global trends we could expect in the future. Articles published more recently, or over a decade ago, may show a different pattern. Because we only evaluated articles published over a 3-year interval, we cannot prove that the trend we noted will persist in 2023 and beyond. This is a quantitative snapshot of AI healthcare models in the 2019–2021 timeframe and antedates the arrival of AI Large Language Models (LLMs) like ChatGPT-4, Bard, and others that will certainly assist care providers but have the potential to replace care providers in certain roles in the future. While this analysis can serve as a baseline, a similar review 5 years in the future may have very different results.

We only evaluated AI/ML-enabled medical devices approved by the FDA, although there are other regulatory bodies in different countries around the world. Not all AI models published in the research literature become commercially used in healthcare; therefore, the AI models that are published may not be representative of the AI models that are commercially available and approved by the FDA. AI/ML models in the form of clinical decision support systems (CDSS) have been around since the 1970s20; however, at that time, significant ethical and legal issues were raised around the use of computers in medicine, including concerns about physician autonomy and about who would be at fault when acting on the recommendation of a CDSS.20 These autonomous CDSS were rejected in favor of diagnostic-assistant CDSS used under the supervision of a physician or other healthcare provider. This history may account for why most FDA-approved devices are approved to be used under the supervision of a physician or other healthcare provider.

There has also been patient resistance to the use of medical AI.21 A prior study showed that patients prefer a human provider to AI even if that preference carries an increased risk of inaccurate diagnosis or surgical complication.21 One hypothesis is that patients believe AI providers are standardized and therefore inflexible, suited to treat the average patient but not the unique individual patient. However, this study pre-dated the use of LLMs. A more recent study showed that the ChatGPT LLM provided more empathetic responses than physicians when posed with clinical questions from a public online social media forum.22 This finding may change how AI models are used in medicine in the future.

An additional limitation is that AI algorithms that assist, or to an extent replace, healthcare providers of various kinds may be used in current practice, but not reflected in either the literature or FDA data. We acknowledge the existence of these algorithms, but there is no way for us to evaluate these.

Finally, this analysis pertains to physician healthcare providers because we focused on medical journals. Our review did not address the extent to which AI has replaced non-physicians engaged in other activities essential to healthcare delivery, such as nurses, nurse practitioners, physician assistants, midwives, social workers, psychologists, dentists, podiatrists, optometrists, and chiropractors.

CONCLUSION

Over the time period studied, most of the AI research in high-impact journals, and all of the AI/ML-enabled medical devices approved by the FDA, assisted rather than replaced healthcare providers. AI algorithms have allowed healthcare providers to build more sophisticated models and to extract more information from the electronic medical record to improve healthcare.

Supplementary Material

ocad094_Supplementary_Data

Contributor Information

Mahmoud Elmahdy, Department of Radiology, Mayo Clinic, Jacksonville, Florida, USA.

Ronnie Sebro, Department of Radiology, Mayo Clinic, Jacksonville, Florida, USA; Center for Augmented Intelligence, Mayo Clinic, Jacksonville, Florida, USA; Department of Orthopedic Surgery, Mayo Clinic, Jacksonville, Florida, USA; Department of Biostatistics, Center for Quantitative Health Sciences, Jacksonville, Florida, USA.

FUNDING

This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.

AUTHOR CONTRIBUTIONS

All authors contributed significantly to this work. RS conceptualized the study. ME and RS searched for and retrieved relevant articles, screened, extracted data from included articles, interpreted the data, drafted the article, and made substantive revisions to the article. All authors gave final approval of and accepted accountability for the article.

SUPPLEMENTARY MATERIAL

Supplementary material is available at Journal of the American Medical Informatics Association online.

CONFLICT OF INTEREST STATEMENT

None declared.

DATA AVAILABILITY

All data are incorporated into the article and its Supplementary Material.

REFERENCES

  • 1. Rajpurkar P, Chen E, Banerjee O, Topol EJ.  AI in health and medicine. Nat Med  2022; 28 (1): 31–8. [DOI] [PubMed] [Google Scholar]
  • 2. Elmahdy M, Sebro R.  Beyond the AJR: comparison of artificial intelligence candidate and radiologists on mock examinations for the fellow of Royal College of Radiology Part B. AJR Am J Roentgenol  2023; doi: 10.2214/ajr.23.29155. [DOI] [PubMed] [Google Scholar]
  • 3. Nam JG, Park S, Hwang EJ, et al.  Development and validation of deep learning-based automatic detection algorithm for malignant pulmonary nodules on chest radiographs. Radiology  2019; 290 (1): 218–28. [DOI] [PubMed] [Google Scholar]
  • 4. Oksuz I, Ruijsink B, Puyol-Antón E, et al.  Automatic CNN-based detection of cardiac MR motion artefacts using k-space data augmentation and curriculum learning. Med Image Anal  2019; 55: 136–47. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 5. MacLellan AN, Price EL, Publicover-Brouwer P, et al.  The use of noninvasive imaging techniques in the diagnosis of melanoma: a prospective diagnostic accuracy study. J Am Acad Dermatol  2021; 85 (2): 353–9. [DOI] [PubMed] [Google Scholar]
  • 6. Heydon P, Egan C, Bolter L, et al.  Prospective evaluation of an artificial intelligence-enabled algorithm for automated diabetic retinopathy screening of 30 000 patients. Br J Ophthalmol  2021; 105 (5): 723–8. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 7. Jackson CR, Sriharan A, Vaickus LJ.  A machine learning algorithm for simulating immunohistochemistry: development of SOX10 virtual IHC and evaluation on primarily melanocytic neoplasms. Mod Pathol  2020; 33 (9): 1638–48. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 8. Bulten W, Balkenhol M, Belinga JA, et al. ; ISUP Pathology Imagebase Expert Panel. Artificial intelligence assistance significantly improves Gleason grading of prostate biopsies by pathologists. Mod Pathol  2021; 34 (3): 660–71. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 9. Skrede OJ, De Raedt S, Kleppe A, et al.  Deep learning for prediction of colorectal cancer outcome: a discovery and validation study. Lancet  2020; 395 (10221): 350–60. [DOI] [PubMed] [Google Scholar]
  • 10. Christensen CM, Raynor ME, McDonald R. What Is Disruptive Innovation. https://hbr.org/2015/12/what-is-disruptive-innovation. Accessed December 30, 2022.
  • 11. Ahuja AS.  The impact of artificial intelligence in medicine on the future role of the physician. PeerJ  2019; 7: e7702. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 12. Perez MV, Mahaffey KW, Hedlin H, et al. ; Apple Heart Study Investigators. Large-scale assessment of a Smartwatch to identify atrial fibrillation. N Engl J Med  2019; 381 (20): 1909–17. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 13. FDA. Secondary. https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-aiml-enabled-medical-devices. Accessed December 30, 2022.
  • 14. Venkatesh KK, Strauss RA, Grotegut CA, et al.  Machine learning and statistical models to predict postpartum hemorrhage. Obstet Gynecol  2020; 135 (4): 935–44. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 15. Bos JD, Schram ME, Mekkes JR.  Dermatologists are essential for quality of care in the general practice of medicine. Actas Dermosifiliogr  2009; 100 (Suppl 1): 101–5. [DOI] [PubMed] [Google Scholar]
  • 16. Wijnberge M, Geerts BF, Hol L, et al.  Effect of a machine learning-derived early warning system for intraoperative hypotension vs standard care on depth and duration of intraoperative hypotension during elective noncardiac surgery: the HYPE randomized clinical trial. JAMA  2020; 323 (11): 1052–60. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 17. Thandra KC, Barsouk A, Saginala K, Aluru JS, Barsouk A.  Epidemiology of lung cancer. Contemp Oncol (Pozn)  2021; 25 (1): 45–52. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 18. Biro P, Hofmann P, Gage D, et al.  Automated tracheal intubation in an airway manikin using a robotic endoscope: a proof of concept study. Anaesthesia  2020; 75 (7): 881–6. [DOI] [PubMed] [Google Scholar]
  • 19. Kim CK, Choi JW, Jiao Z, et al.  An automated COVID-19 triage pipeline using artificial intelligence based on chest radiographs and clinical data. NPJ Digit Med  2022; 5 (1): 5. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 20. Elmahdy M, Sebro R.  Radiomics analysis in medical imaging research. J Med Radiat Sci  2023; 70 (1): 3–7. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 20. Sutton RT, Pincock D, Baumgart DC, Sadowski DC, Fedorak RN, Kroeker KI.  An overview of clinical decision support systems: benefits, risks, and strategies for success. NPJ Digit Med  2020; 3: 17. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 21. Longoni C, Bonezzi A, Morewedge CK.  Resistance to medical artificial intelligence. J Consum Res  2019; 46 (4): 629–50. [Google Scholar]
  • 22. Ayers JW, Poliak A, Dredze M, et al.  Comparing physician and artificial intelligence chatbot responses to patient questions posted to a public social media forum. JAMA Intern Med  2023; e231838. doi: 10.1001/jamainternmed.2023.1838. [DOI] [PMC free article] [PubMed] [Google Scholar]


