Abstract
Artificial intelligence (AI) is increasingly being utilized to augment the practice of emergency medicine due to rapid technological advances and breakthroughs. AI applications have been used to enhance triage systems, predict disease-specific risk, estimate staffing needs, forecast patient decompensation, and interpret imaging findings in the emergency department setting. This article aims to help readers without formal training become informed end users of AI in emergency medicine. The authors briefly discuss the principles and key terminology of AI, the reasons for its rising popularity, its potential applications in the emergency department setting, and its limitations. Resources for further self-study are also provided.
Keywords: artificial intelligence, machine learning, neural networks, natural language processing, informatics, education
1. Introduction
Artificial intelligence (AI) has been utilized for decades, and most people are familiar with some form of AI, such as grammar and spell-checking applications, voice recognition software, or automatic electrocardiogram interpretation. Recently, there has been an increasing interest in AI, spurred forward by the advent of conversational AI programs, such as ChatGPT, and AI’s potential to revolutionize health care.1, 2, 3 The goal of this article is to familiarize readers without formal AI training with the principles of AI, the reasons for its rising popularity, and its potential applications in the emergency department (ED) setting, as well as its limitations.
2. Decoding the Jargon
AI is “intelligence” exhibited by computers or machines. Intelligence can be thought of as the ability to learn and apply appropriate techniques to solve complex problems and achieve goals.4 Intelligence is a spectrum, with the level of intelligence predicated on the complexity of the task(s), the range of settings over which the agent can operate, and the variety of goals that it can accomplish. A robot used to perform a set task, such as picking up and placing an object, can be accurate but is not considered to be highly intelligent as the robot will perform this task in the same manner every time.
What generally helps achieve higher levels of AI is a domain of approaches called machine learning (ML). ML is defined as the use of computer algorithms and models to identify patterns in data and make predictions based on those patterns to achieve specific goals.5 Prediction models, including both AI models and more conventional non-AI statistical models, have clinical applications at every stage of a patient's progression through the ED (Fig 1). ML approaches can be subclassified in several ways. Conventionally, the 2 broad categories are supervised and unsupervised learning. In supervised learning, the computer is given a set of training data that includes the "correct" answers, and a model is created to make predictions based on patterns in these data. Basic (not highly intelligent) supervised ML approaches include linear and logistic regression.6 In unsupervised learning, the data alone are provided, without annotations or predefined outcomes, and the computer attempts to identify latent groupings within them.7
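To make the supervised/unsupervised distinction concrete, the sketch below applies both ideas to the same toy dataset. The heart rate values, the threshold classifier, and the 1-dimensional k-means routine are illustrative inventions for this article, not clinical models.

```python
# Illustrative sketch only: supervised vs unsupervised learning on the same
# hypothetical data, using nothing beyond the Python standard library.

# --- Supervised learning: training data include the "correct" answers. ---
# Toy task: predict admission (1) vs discharge (0) from one vital sign.
heart_rates = [72, 80, 88, 110, 125, 132]  # inputs
admitted = [0, 0, 0, 1, 1, 1]              # known outcomes ("labels")

def learn_threshold(xs, ys):
    """Pick the cutoff that misclassifies the fewest labeled examples."""
    best_cut, best_err = None, len(xs) + 1
    for cut in xs:
        err = sum((x >= cut) != y for x, y in zip(xs, ys))
        if err < best_err:
            best_cut, best_err = cut, err
    return best_cut

cutoff = learn_threshold(heart_rates, admitted)

def predict(hr):
    """Apply the learned rule to a new, unlabeled value."""
    return int(hr >= cutoff)

# --- Unsupervised learning: the data alone, with no labels. ---
def kmeans_1d(xs, iters=10):
    """Split values into 2 latent groups by iteratively refining 2 centers."""
    centers = [min(xs), max(xs)]
    for _ in range(iters):
        # Assign each value to its nearest center (group 0 or 1).
        labels = [int(abs(x - centers[0]) > abs(x - centers[1])) for x in xs]
        # Move each center to the mean of its assigned values.
        for k in (0, 1):
            members = [x for x, lab in zip(xs, labels) if lab == k]
            if members:
                centers[k] = sum(members) / len(members)
    return labels
```

Here the supervised model learns a cutoff of 110 beats/min because that is where the labels separate, while the unsupervised routine recovers the same 2 groups without ever seeing the labels.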
Figure 1.
Clinical applications and methodology of both artificial intelligence (AI) and non-AI models in the context of the emergency department (ED) workflow and research. ∗Most methods listed in this figure are not strictly AI but can be used to create AI applications; the exception is neural networks, which are, by definition, AI. Figure 3 displays the overlapping nature of many of these domains. NLP, natural language processing.
A neural network is a form of supervised ML loosely based on a simplified model of biological neural organization. These networks are often represented in a graphical form as nodes arranged in stacks or layers, which are interconnected through multiple lines to other layers, often with varying numbers of nodes. The first layer is termed the input layer; this layer represents model inputs that have been transcoded into numeric representations (eg, words are represented as numbers). The final layer is known as the output layer; at this stage, numeric results can be provided as is or converted back to the text, image, or audio data they represent. Hidden layers are those layers in between the input and output layers. Computationally, the numeric values of nodes from one layer are fed to the next after applying a weighting that is represented on their connecting arrow. If a node has multiple inputs, these are combined in some manner, often through summation. In essence, the weights reflect how much a preceding node affects the respective subsequent node. Weights are determined during the training process (Fig 2).
Figure 2.
Neural network with an input layer, hidden layers that analyze the output from the previous layer, and a final output layer.
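The layer-by-layer computation described above can be written out directly. In the sketch below, the weights are hand-picked for illustration (in a real network they are learned during training), and the rectified linear unit (ReLU) is used as a typical activation function.

```python
# A minimal forward pass through a toy neural network: each node sums the
# weighted values of the previous layer, then applies an activation function.

def relu(v):
    """A common activation: pass positive values through, zero out negatives."""
    return max(0.0, v)

def layer(inputs, weights, activation):
    """Each output node combines all inputs, weighted by its own weight list."""
    return [activation(sum(w * x for w, x in zip(node_weights, inputs)))
            for node_weights in weights]

inputs = [1.0, 2.0]                    # input layer: numeric encodings
hidden_weights = [[0.5, -0.2],         # hidden node 1: one weight per input
                  [0.1, 0.4]]          # hidden node 2
output_weights = [[1.0, 1.0]]          # single output node

hidden = layer(inputs, hidden_weights, relu)   # values of the hidden layer
output = layer(hidden, output_weights, relu)   # value of the output layer
```

Stacking more calls to `layer` adds hidden layers; with 3 or more layers, this is the deep learning described in the next paragraph.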
Deep learning is a term used for neural networks with 3 or more layers.5 Because the number of connections grows rapidly as layers are added, deep learning requires large amounts of computing resources but excels at finding complex or nuanced relationships in data. The complexity of the network can also make it difficult to understand why a model produces a particular answer, contributing to ML's reputation as a "black box" and to what is sometimes called the "explainability crisis," in which even a model's designers cannot determine why it arrived at a given result.8
Natural language processing (NLP) is another subfield of AI that teaches computers to interpret language as it is naturally spoken or written by humans. NLP may be as simple as determining how often a word appears in a text or as complex as using ML to respond to prompts, as exemplified by ChatGPT or Gemini.9,10 NLP can be used for tasks such as predicting patient disposition from a nursing triage note.11
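The simplest form of NLP mentioned above, counting how often words appear, can be shown in a few lines. The triage note text below is invented for illustration.

```python
# Word-frequency counting: the simplest NLP task described above.
import re
from collections import Counter

# Hypothetical triage note (invented for illustration).
note = ("Pt c/o chest pain radiating to left arm. Chest pain began 2 hours "
        "ago. Denies shortness of breath.")

# Lowercase the text, then extract alphabetic tokens.
words = re.findall(r"[a-z']+", note.lower())
counts = Counter(words)
```

Even this trivial representation (here "chest" and "pain" each appear twice) is the starting point for bag-of-words models; systems like the disposition predictor cited above build far richer representations of the same kind of text.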
The utilization of AI can be broadly divided into 2 categories: analysis and generation. Analytic AI focuses on analyzing data to identify patterns and correlations. Generative AI uses a combination of analytic techniques to build a model that can then produce new written text and images. Large language models (LLMs), which are a form of neural network, are the underlying technology used by ChatGPT, Gemini, Microsoft 365 Copilot, and other similar products. LLMs use the transformer architecture, an ML framework that employs attention mechanisms to determine relationships between data. Because transformers process data in parallel, LLMs are more efficient than models that process data sequentially.12 LLMs are trained on extremely large datasets of text using significant computational resources, with the goal of repeatedly predicting the next word after a prompt to produce generated text. Because an LLM predicts the next word based on patterns in its training data, it can generate plausible but incorrect information: training data include inaccuracies and biases that the model cannot correct in real time, and the model identifies patterns, not truths, and can misinterpret prompts. Responses that present false information as fact have been termed "hallucinations." An example of an AI hallucination is the generation of references with author names and titles that appear correct but are nonexistent.13
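The "predict the next word" objective can be illustrated with a toy bigram model: given a word, it predicts the word that most often followed it in its (invented) training text. Real LLMs condition on long contexts through transformer attention rather than a single preceding word, so this sketch shows only the core objective, not the architecture.

```python
# Toy next-word predictor: counts which word follows which in a tiny corpus.
from collections import Counter, defaultdict

# Invented training text, tokenized by whitespace.
corpus = ("the patient reports chest pain . the patient denies fever . "
          "the patient reports nausea .").split()

# For each word, count every word observed immediately after it.
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict_next(word):
    """Return the continuation seen most often in training."""
    return following[word].most_common(1)[0][0]
```

Note the failure mode this toy shares with LLMs: the model emits whatever pattern dominated its training data, whether or not it is true for the current input, which is exactly the mechanism behind plausible-but-wrong generated text.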
Similarly, text-to-image generation is achieved by services such as DALL-E, Midjourney, and Stable Diffusion. Current systems typically rely on diffusion models, another class of generative neural network, although earlier text-to-image systems used generative adversarial networks. Generative AI has raised a number of new ethical and legal questions, especially regarding the text and images on which models are trained and how their output fits into the current copyright legal framework.
Figure 3 demonstrates the relationship between AI, ML, NLP, and related domains. This conceptual diagram departs from some common portrayals of AI as an overarching category with ML as a subset. Instead, it emphasizes that AI and ML are distinct fields with overlapping areas. This perspective better captures how much of ML, including statistical and algorithmic methods such as regression, clustering, and basic decision trees, focuses on pattern identification and prediction without inherently exhibiting the cognitive or adaptive qualities associated with AI. At the same time, this approach acknowledges that AI includes techniques beyond ML, such as rule-based systems and symbolic reasoning, which do not rely on data-driven learning. The intersection between AI and ML represents the shared space where ML is employed to create systems that mimic or simulate human-like intelligence, as seen in many leading approaches to complex tasks, such as neural networks for NLP and computer vision. Thus, although some applications of ML contribute to AI, ML itself is broader and not always aimed at "intelligent" behavior, just as AI is broader and not solely dependent on data-driven learning methods.
Figure 3.
Simplified model of conceptual overlap between commonly employed machine learning and artificial intelligence approaches. CNN, convolutional neural network; GAN, generative adversarial network; LLM, large language model; NLP, natural language processing.
3. The Rise of AI and its Applications in Emergency Medicine
Although AI is currently making headlines in the medical and lay press, it has been around for decades. AI’s rapid rise in prominence has been built on recent advancements in computing power, massive datasets, and breakthroughs in algorithms. These advancements allow for the development of AI systems that can perform complex tasks, such as NLP, image recognition, or computer vision, and decision making at levels that rival or surpass human capabilities.14, 15, 16, 17 This rapid expansion of computational ability has pushed AI into new areas of practical use. It is now possible for clinicians and the general public to easily access and experiment with AI like ChatGPT.
With all of the excitement about AI in medicine, it is important to understand how it could benefit the frontline emergency physician. Several of AI's potential applications in the ED lie in its ability to take over manual tasks, from documentation support (ranging from speech recognition to drafting clinical notes) to analyzing vast amounts of real-time data and filling in knowledge gaps.18 AI can also be used to guide medical diagnosis, create real-time risk analyses, aid in the interpretation of images, and improve how physicians interact with medical records.19 Two examples of currently used AI applications in the ED, involving early detection of acute subdural hematomas and sepsis, are illustrated in the Table.20,21
Table.
Examples of implemented AI applications in the ED.
| Case 1: acute subdural hematoma detection |
| You work in a critical access ED with limited radiology resources. A 78-year-old male is brought to the ED by ambulance following a ground-level fall. His medication list includes daily aspirin and warfarin. The patient remains confused, and your initial assessment reveals a Glasgow Coma Scale score of 14. A stat head CT scan is performed, which suggests a possible subdural hematoma. To assess the urgency of transfer, you use the recently introduced AI tool, Viz.ai, to confirm the diagnosis and evaluate the need for possible surgical management. Colasurdo et al20 reported that this convolutional neural network application could quantify hematoma thickness, volume, and midline shift with 95.1% accuracy (95% CI, 91.7%-97.3%). The AI tool identifies a midline shift, and based on this finding, you arrange for the patient to be transferred to a tertiary care facility with neurosurgical coverage before the radiologist (reading remotely) confirms the diagnosis. |
| Case 2: sepsis detection |
| You work in a high-volume ED that recently introduced "Sepsis Watch," an AI-driven system to detect sepsis. An 86-year-old male with chronic obstructive pulmonary disease checks in, reporting fever and shortness of breath. His initial vital signs include a temperature of 38.6 °C, heart rate of 120 beats/min, blood pressure of 130/70 mm Hg, and oxygen saturation of 92% on room air. His lung examination reveals wheezing and crackles. While you are beginning to order diagnostics in your electronic health record, a Sepsis Watch alert is triggered. Because of this, you add blood cultures, initiate antibiotics, and preorder a 3-hour repeat lactic acid level to meet SEP-1 core measure compliance. The model was developed to identify sepsis early, giving clinicians enough time to confirm the diagnosis and complete the required treatment bundles. It uses both static data, such as comorbidities and demographics, and dynamic data, such as medications, laboratory results, and vital signs. The model was designed for use in adult patients in the ED, from triage through admission; it samples the patients' data multiple times and detects relationships among the variables that could signal the onset of sepsis.21 |
AI, artificial intelligence; CT, computed tomography; ED, emergency department; SEP-1, severe sepsis and septic shock early management bundle.
AI interventions for ED operations can take place at any stage of patient progression during the ED visit: arrival, within the ED, and at discharge.22 These AI systems are designed to accelerate the rate at which information becomes available and decisions are made in the ED. For example, at the time of patient arrival, most EDs in North America use the Emergency Severity Index (ESI) to triage patients. A large majority of these patients are triaged to ESI level 3. AI algorithms have been shown to be able to safely upgrade or downgrade ESI level 3 patients using information available in the electronic health record (EHR) at the time of triage, thereby facilitating ED resource allocation.23
At the bedside, AI has been used to review and abstract pertinent information from EHRs in real time.24 These solutions may improve efficiency and may also be paired with new ways to evaluate vital signs, such as examining heart rate variability for what it could reveal about a patient's pathology and clinical course.25 ML is also being added to point-of-care ultrasound imaging to help physicians identify crucial findings, such as free fluid in the abdominal cavity or pulmonary edema, and is being applied to identify fractures on radiographs.26, 27, 28 NLP has been used to identify potential diagnoses from clinical notes; models have been developed that can identify sepsis, influenza, and acute appendicitis with high precision using data available in clinical notes.29, 30, 31
At the time of disposition, AI has been used to predict return ED visits and hospital admissions for improved capacity management.22,32 ML models have also been used to predict specific outcomes, such as in-hospital mortality from sepsis.33
4. Limitations of AI
Because AI at its core is a mathematical model, it is constrained by many of the limitations of other models with which physicians are more familiar. The adage "garbage in, garbage out," familiar from systematic reviews and any other data analysis, applies equally to AI. Any AI prediction depends on the volume and quality of the data used to create it. The importance of proper data collection practices cannot be overstated: flawed practices can introduce bias and generate a distorted view of model performance, particularly for subpopulations that are not adequately represented in the study dataset. AI also does not inherently mean "better." Standard statistical analysis techniques have been shown to perform as well as or better than AI in many situations.34, 35, 36
Although the benefits of AI have been lauded by the general press and specialists alike, there has been an important parallel conversation on the ethics of AI in general and its application in medicine in particular.37 Ensuring that AI improves patients' well-being requires a paramount emphasis on ethical considerations throughout its design, development, and deployment. The integration of ML into health care introduces a range of ethical concerns, particularly because models can exacerbate existing health disparities. Ethical issues pertaining to AI in health care encompass matters of privacy and surveillance, bias and discrimination, and the extent of human involvement.38,39 Bias in AI occurs for a variety of reasons, and some have discussed the domains of social and statistical (systemic) bias within AI.40 Both kinds of bias result in false assumptions. Social bias arises when the data collected reflect preexisting biases in health care, such as poor predictions of cardiac outcomes in populations known to present with atypical chest pain.41 Statistical bias can occur when the accumulated data differ significantly from the population for which predictions are made.42 A prominent example of bias in AI was described in 2019, when Obermeyer et al43 published a study showing that a common algorithm used to identify patients who could benefit from high-risk care management had a significant racial bias. For a given risk score, there was a dramatic difference in severity of illness between race groups because the algorithm was built on health care expenditures rather than health care needs, creating a bias rooted in access to care and care affordability.43
When AI meets the standard for software as a medical device (SaMD) by being integrated into software used to treat our patients, it is also subject to regulation as a medical device by agencies such as the United States Food and Drug Administration (FDA).44,45 This can lead to outdated software as it cannot be prospectively changed based on new data, although methods for continuously updated models do exist. The FDA’s guidelines for SaMD provide a framework for assessing the safety and effectiveness of software that performs medical functions independently of a hardware medical device. These guidelines emphasize the importance of clinical evaluation, risk management, and transparency in the software’s intended use and performance. The FDA categorizes SaMD based on the potential risk to patients, requiring varying levels of regulatory scrutiny, and encourages a lifecycle approach to SaMD development, ensuring continuous monitoring and improvement.45 Other federal agencies also have AI policies; for example, the Office of the National Coordinator for Health Information Technology's Health Data, Technology, and Interoperability final rule includes federal requirements for AI and ML-based predictive software in health care.46,47 AI regulation continues to evolve, particularly as newer forms of AI appear, including generative AI.48 One popular framework for describing high-quality AI algorithms is “FAVES,” aiming for AI that is fair, appropriate, valid, effective, and safe. AI assurance laboratories are one approach under consideration to ensure that AI meets these standards. These AI assurance laboratories would apply nationwide standards to AI testing.49
5. Future Directions and Further Readings
AI is poised to revolutionize the delivery of patient care in emergency medicine, bringing about a paradigm shift in triage, diagnosis, and treatment. As technology advances, AI-driven tools are expected to play a pivotal role in expediting the identification of life-threatening conditions by analyzing patient data in real time, enabling quicker and more accurate medical decision making. AI algorithms can aid clinicians in predicting patient course and deterioration, allowing for more proactive interventions. Moreover, the integration of AI-powered imaging analysis can enhance diagnostic accuracy, enabling the swift identification of injuries and abnormalities. AI also has the potential to significantly enhance evidence-based medicine by revolutionizing the way information is gathered, analyzed, and applied: it can personalize patient care, offer real-time decision support, and facilitate research. The primary hurdle facing AI in health care is not the capability of the technology but its integration into everyday clinical practice. As AI's role in emergency medicine expands, ensuring seamless integration with existing protocols, addressing concerns related to data security and privacy, and maintaining a balance between human judgment and AI-driven recommendations will be crucial. For extensive adoption to occur, regulatory approvals for AI systems are necessary, and more rigorous evaluation and evidence for AI algorithms are imperative. Integration with EHRs is essential, and substantial standardization must be achieved to ensure uniform functionality. Clinicians need to be trained in the use of AI tools and reimbursed for applying them in clinical practice, and continuous education as the field evolves will be essential.50,51
AI holds the dual potential of mitigating physician burnout and enhancing the overall patient experience. Nevertheless, few research studies have focused on understanding patient perspectives toward AI technologies in health care.52,53 One such study in South Korea found that patients perceived information to be of more importance if an AI tool was used.53 As AI systems become more integrated into emergency medicine practice, their use inevitably raises questions about the impact on the patient-clinician relationship. Patients might be concerned that AI-driven decisions lack the human touch and empathy that they value in their physicians. Striking the right balance between AI-assisted care and the personalized, compassionate aspects of clinical practice will be crucial in shaping the evolving patient-clinician relationship in the age of AI.
As AI continues to develop with increasing applications in emergency medicine, it must move beyond novelty and prove its ability to improve health care outcomes, health care delivery, and the health care workplace. In the future, nationwide ED databases will likely allow for large-scale prediction models. No matter the promise of new technology, physicians must work with AI experts to direct the tools created to be useful and improve the care of patients. Hasty adoption of untested AI tools may potentially result in inadvertent errors by health care professionals, jeopardizing patient well-being, eroding trust in AI, and thus impeding the long-term benefits and global implementation of such technologies.
For emergency physicians looking to expand their knowledge of AI, there are a variety of avenues. Dr Eric Topol's book Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again explores the promises of AI in health care. For those wanting guidance on how to interpret or scrutinize research studies using AI methods, The Journal of the American Medical Association offers an installment of its Users' Guides to the Medical Literature series titled "How to Read Articles That Use Machine Learning."54 Lastly, more training and education can be found through the American Medical Informatics Association (https://amia.org/). Clinical informatics fellowships and board certification in clinical informatics are additional options for physicians looking to expand their involvement in AI.55
Funding and Support
None.
Footnotes
Supervising Editor: Adam Landman, MD, MS
On behalf of the Artificial Intelligence Subcommittee of the American College of Emergency Physicians (ACEP) Research Committee
References
- 1.Koteluk O., Wartecki A., Mazurek S., Kołodziejczak I., Mackiewicz A. How do machines learn? Artificial intelligence as a new era in medicine. J Pers Med. 2021;11(1):32. doi: 10.3390/jpm11010032. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 2.Rajkomar A., Dean J., Kohane I. Machine learning in medicine. N Engl J Med. 2019;380(14):1347–1358. doi: 10.1056/NEJMra1814259. [DOI] [PubMed] [Google Scholar]
- 3.Liu P.R., Lu L., Zhang J.Y., Huo T.T., Liu S.X., Ye Z.W. Application of artificial intelligence in medicine: an overview. Curr Med Sci. 2021;41(6):1105–1115. doi: 10.1007/s11596-021-2474-3. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 4.Manning C. Artificial intelligence definitions. Stanford University human-centered artificial intelligence. https://hai.stanford.edu/sites/default/files/2020-09/AI-Definitions-HAI.pdf
- 5.Ramlakhan S., Saatchi R., Sabir L., et al. Understanding and interpreting artificial intelligence, machine learning and deep learning in emergency medicine. Emerg Med J. 2022;39(5):380–385. doi: 10.1136/emermed-2021-212068. [DOI] [PubMed] [Google Scholar]
- 6.Ramlakhan S.L., Saatchi R., Sabir L., et al. Building artificial intelligence and machine learning models: a primer for emergency physicians. Emerg Med J. 2022;39(5) doi: 10.1136/emermed-2022-212379. [DOI] [PubMed] [Google Scholar]
- 7.Kuhn M., Julia Silge J. Software for Modeling. Tidy modeling with R. https://www.tmwr.org/software-modeling
- 8.Sarkar A. Is explainable AI a race against complexity? Preprint. Posted online May 17, 2022. arXiv 2205.10119. 10.48550/arXiv.2205.10119 [DOI]
- 9.Silge J., Robinson D. Text mining with R. https://www.tidytextmining.com/preface.html
- 10.Hvitfeldt E., Silge J. Supervised machine learning for text analysis in R. https://smltar.com/preface
- 11.Sterling N.W., Patzer R.E., Di M., Schrager J.D. Prediction of emergency department patient disposition based on natural language processing of triage notes. Int J Med Inform. 2019;129:184–188. doi: 10.1016/j.ijmedinf.2019.06.008. [DOI] [PubMed] [Google Scholar]
- 12.Vaswani A. Shazeer N, Parmar N, et al. Attention is all you need. Posted online June 12, 2017. arXiv 1706.03762. 10.48550/arXiv.1706.03762 [DOI]
- 13.Salvagno M., Taccone F.S., Gerli A.G. Artificial intelligence hallucinations. Crit Care. 2023;27(1):180. doi: 10.1186/s13054-023-04473-y. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 14.Friedman C., Hripcsak G. Natural language processing and its future in medicine. Acad Med. 1999;74(8):890–895. doi: 10.1097/00001888-199908000-00012. [DOI] [PubMed] [Google Scholar]
- 15.Zeng N., Zuo S., Zheng G., Ou Y., Tong T. Editorial: artificial intelligence for medical image analysis of neuroimaging data. Front Neurosci. 2020;14:480. doi: 10.3389/fnins.2020.00480. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 16.Giordano C., Brennan M., Mohamed B., Rashidi P., Modave F., Tighe P. Accessing artificial intelligence for clinical decision-making. Front Digit Health. 2021;3 doi: 10.3389/fdgth.2021.645232. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 17.Lee S., Lam S.H., Hernandes Rocha T.A., et al. Machine learning and precision medicine in emergency medicine: the basics. Cureus. 2021;13(9) doi: 10.7759/cureus.17636. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 18.Coiera E., Kocaballi B., Halamka J., Laranjo L. The digital scribe. NPJ Digit Med. 2018;1:58. doi: 10.1038/s41746-018-0066-9. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 19.Yuba M., Iwasaki K. Systematic analysis of the test design and performance of AI/ML-based medical devices approved for triage/detection/diagnosis in the USA and Japan. Sci Rep. 2022;12(1) doi: 10.1038/s41598-022-21426-7. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 20.Colasurdo M., Leibushor N., Robledo A., et al. Automated detection and analysis of subdural hematomas using a machine learning algorithm. J Neurosurg. 2022;138(4):1077–1084. doi: 10.3171/2022.8.JNS22888. [DOI] [PubMed] [Google Scholar]
- 21.Duke Health Duke’s augmented intelligence system helps prevent sepsis in the ED. https://physicians.dukehealth.org/articles/dukes-augmented-intelligence-system-helps-prevent-sepsis-ed
- 22.Berlyand Y., Raja A.S., Dorner S.C., et al. How artificial intelligence could transform emergency department operations. Am J Emerg Med. 2018;36(8):1515–1517. doi: 10.1016/j.ajem.2018.01.017. [DOI] [PubMed] [Google Scholar]
- 23.Levin S., Toerper M., Hamrock E., et al. Machine-learning-based electronic triage more accurately differentiates patients with respect to clinical outcomes compared with the emergency severity index. Ann Emerg Med. 2018;71(5):565–574.e2. doi: 10.1016/j.annemergmed.2017.08.005. [DOI] [PubMed] [Google Scholar]
- 24.Chi E.A., Chi G., Tsui C.T., et al. Development and validation of an artificial intelligence system to optimize clinician review of patient records. JAMA Netw Open. 2021;4(7) doi: 10.1001/jamanetworkopen.2021.17391. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 25.Chiew C.J., Liu N., Tagami T., Wong T.H., Koh Z.X., Ong M.E.H. Heart rate variability based machine learning models for risk prediction of suspected sepsis patients in the emergency department. Medicine (Baltimore) 2019;98(6) doi: 10.1097/MD.0000000000014197. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 26.Leo M.M., Potter I.Y., Zahiri M., Vaziri A., Jung C.F., Feldman J.A. Using deep learning to detect the presence and location of hemoperitoneum on the focused assessment with sonography in trauma (FAST) examination in adults. J Digit Imaging. 2023;36(5):2035–2050. doi: 10.1007/s10278-023-00845-6. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 27.Gottlieb M., Patel D., Viars M., Tsintolas J., Peksa G.D., Bailitz J. Comparison of artificial intelligence versus real-time physician assessment of pulmonary edema with lung ultrasound. Am J Emerg Med. 2023;70:109–112. doi: 10.1016/j.ajem.2023.05.029. [DOI] [PubMed] [Google Scholar]
- 28.Lindsey R., Daluiski A., Chopra S., et al. Deep neural network improves fracture detection by clinicians. Proc Natl Acad Sci U S A. 2018;115(45):11591–11596. doi: 10.1073/pnas.1806905115. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 29.López Pineda A., Ye Y., Visweswaran S., Cooper G.F., Wagner M.M., Tsui F.R. Comparison of machine learning classifiers for influenza detection from emergency department free-text reports. J Biomed Inform. 2015;58:60–69. doi: 10.1016/j.jbi.2015.08.019. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 30.Hsieh C.H., Lu R.H., Lee N.H., Chiu W.T., Hsu M.H., Li Y.C. Novel solutions for an old disease: diagnosis of acute appendicitis with random forest, support vector machines, and artificial neural networks. Surgery. 2011;149(1):87–93. doi: 10.1016/j.surg.2010.03.023. [DOI] [PubMed] [Google Scholar]
- 31.Horng S., Sontag D.A., Halpern Y., Jernite Y., Shapiro N.I., Nathanson L.A. Creating an automated trigger for sepsis clinical decision support at emergency department triage using machine learning. PLoS One. 2017;12(4) doi: 10.1371/journal.pone.0174708. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 32.Hong W.S., Haimovich A.D., Taylor R.A. Predicting hospital admission at emergency department triage using machine learning. PLoS One. 2018;13(7) doi: 10.1371/journal.pone.0201016. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 33.Taylor R.A., Pare J.R., Venkatesh A.K., et al. Prediction of in-hospital mortality in emergency department patients with sepsis: a local big data-driven, machine learning approach. Acad Emerg Med. 2016;23(3):269–278. doi: 10.1111/acem.12876. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 34.Wu T., Wei Y., Wu J., Yi B., Li H. Logistic regression technique is comparable to complex machine learning algorithms in predicting cognitive impairment related to post intensive care syndrome. Sci Rep. 2023;13(1):2485. doi: 10.1038/s41598-023-28421-6. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 35.Christodoulou E., Ma J., Collins G.S., Steyerberg E.W., Verbakel J.Y., Van Calster B. A systematic review shows no performance benefit of machine learning over logistic regression for clinical prediction models. J Clin Epidemiol. 2019;110:12–22. doi: 10.1016/j.jclinepi.2019.02.004. [DOI] [PubMed] [Google Scholar]
- 36. Sun Z., Dong W., Shi H., Ma H., Cheng L., Huang Z. Comparing machine learning models and statistical models for predicting heart failure events: a systematic review and meta-analysis. Front Cardiovasc Med. 2022;9. doi: 10.3389/fcvm.2022.812276.
- 37. Lopez-Jimenez F., Attia Z., Arruda-Olson A.M., et al. Artificial intelligence in cardiology: present and future. Mayo Clin Proc. 2020;95(5):1015–1039. doi: 10.1016/j.mayocp.2020.01.038.
- 38. Liyanage H., Liaw S.T., Jonnagaddala J., et al. Artificial intelligence in primary health care: perceptions, issues, and challenges. Yearb Med Inform. 2019;28(1):41–46. doi: 10.1055/s-0039-1677901.
- 39. Gottlieb M., Kline J.A., Schneider A.J., Coates W.C. ChatGPT and conversational artificial intelligence: ethics in the eye of the beholder. Am J Emerg Med. 2023;70:191. doi: 10.1016/j.ajem.2023.06.023.
- 40. Chen Y., Clayton E.W., Novak L.L., Anders S., Malin B. Human-centered design to address biases in artificial intelligence. J Med Internet Res. 2023;25. doi: 10.2196/43251.
- 41. El-Menyar A., Zubaid M., Sulaiman K., et al. Atypical presentation of acute coronary syndrome: a significant independent predictor of in-hospital mortality. J Cardiol. 2011;57(2):165–171. doi: 10.1016/j.jjcc.2010.11.008.
- 42. Parikh R.B., Teeple S., Navathe A.S. Addressing bias in artificial intelligence in health care. JAMA. 2019;322(24):2377–2378. doi: 10.1001/jama.2019.18058.
- 43. Obermeyer Z., Powers B., Vogeli C., Mullainathan S. Dissecting racial bias in an algorithm used to manage the health of populations. Science. 2019;366(6464):447–453. doi: 10.1126/science.aax2342.
- 44. U.S. Food and Drug Administration. Artificial intelligence/machine learning (AI/ML)-based software as a medical device (SaMD) action plan. https://www.fda.gov/media/145022/download
- 45. U.S. Food and Drug Administration. Software as a medical device (SaMD). https://www.fda.gov/medical-devices/digital-health-center-excellence/software-medical-device-samd
- 46. Health Affairs. A regulation to promote responsible AI in health care. https://www.healthaffairs.org/content/forefront/regulation-promote-responsible-ai-health-care
- 47. Office of the National Coordinator for Health IT. Health data, technology, and interoperability: certification program updates, algorithm transparency, and information sharing (HTI-1) final rule. https://www.healthit.gov/topic/laws-regulation-and-policy/health-data-technology-and-interoperability-certification-program
- 48. Blumenthal D., Patel B. The regulation of clinical artificial intelligence. NEJM AI. 2024;1(8). doi: 10.1056/AIpc2400545.
- 49. Shah N.H., Halamka J.D., Saria S., et al. A nationwide network of health AI assurance laboratories. JAMA. 2024;331(3):245–249. doi: 10.1001/jama.2023.26930.
- 50. Davenport T., Kalakota R. The potential for artificial intelligence in healthcare. Future Healthc J. 2019;6(2):94–98. doi: 10.7861/futurehosp.6-2-94.
- 51. Mueller B., Kinoshita T., Peebles A., Graber M.A., Lee S. Artificial intelligence and machine learning in emergency medicine: a narrative review. Acute Med Surg. 2022;9(1). doi: 10.1002/ams2.740.
- 52. Haan M., Ongena Y.P., Hommes S., Kwee T.C., Yakar D. A qualitative study to understand patient perspective on the use of artificial intelligence in radiology. J Am Coll Radiol. 2019;16(10):1416–1419. doi: 10.1016/j.jacr.2018.12.043.
- 53. Park H.J. Patient perspectives on informed consent for medical AI: a web-based experiment. Digit Health. 2024;10. doi: 10.1177/20552076241247938.
- 54. Liu Y., Chen P.C., Krause J., Peng L. How to read articles that use machine learning: users’ guides to the medical literature. JAMA. 2019;322(18):1806–1816. doi: 10.1001/jama.2019.16489.
- 55. American Medical Informatics Association. Careers and certifications. https://amia.org/careers-certifications
