Abstract
Stroke poses a significant health challenge, with ischemic and hemorrhagic subtypes requiring timely and accurate diagnosis for effective management. Traditional imaging techniques like CT have limitations, particularly in early ischemic stroke detection. Recent advancements in artificial intelligence (AI) offer potential improvements in stroke diagnosis by enhancing imaging interpretation. This meta-analysis aims to evaluate the diagnostic accuracy of AI systems compared to human experts in detecting ischemic and hemorrhagic strokes. The review was conducted following PRISMA-DTA guidelines. Studies included stroke patients evaluated in emergency settings using AI-based models on CT or MRI, with human radiologists as the reference standard. Databases searched were MEDLINE, Scopus, and Cochrane Central, up to January 1, 2024. The primary outcome was diagnostic accuracy, including sensitivity, specificity, and AUROC; methodological quality was assessed using QUADAS-2. Nine studies met the inclusion criteria. The pooled analysis for ischemic stroke revealed a mean sensitivity of 86.9% (95% CI: 69.9%–95%) and specificity of 88.6% (95% CI: 77.8%–94.5%). For hemorrhagic stroke, the pooled sensitivity and specificity were 90.6% (95% CI: 86.2%–93.6%) and 93.9% (95% CI: 87.6%–97.2%), respectively. The diagnostic odds ratios indicated strong diagnostic efficacy, particularly for hemorrhagic stroke (DOR: 148.8, 95% CI: 79.9–277.2). AI-based systems exhibit high diagnostic accuracy for both ischemic and hemorrhagic strokes, closely approaching that of human radiologists. These findings underscore the potential of AI to improve diagnostic precision and expedite clinical decision-making in acute stroke settings.
Keywords: Artificial intelligence, stroke diagnosis, ischemic stroke, hemorrhagic stroke, neuroimaging
Introduction
Stroke is one of the most critical cerebrovascular disorders, representing a major global health challenge due to its potential for causing severe brain damage, long-term disability, and even death. Characterized by an abrupt disruption of blood flow to the brain, stroke can be broadly classified into two main types: ischemic strokes, caused by blockages in blood vessels, and hemorrhagic strokes, resulting from the rupture of vessels.1,2 As the second leading cause of death worldwide, stroke is responsible for an estimated 5.5 million deaths annually and remains a significant contributor to disability. 3 Prompt diagnosis and effective treatment are essential in mitigating long-term complications and reducing the global burden of stroke. 4
Globally, acute stroke accounts for nearly 10% of all deaths, with approximately 13.7 million new stroke cases reported each year, according to the World Stroke Organization.2,3 While stroke can affect individuals of any age, over half of cases occur in people aged 70 years and older. 2 Notably, about 87% of strokes are ischemic in nature, highlighting the predominance of vessel blockages in stroke pathology. 5 Common risk factors include atrial fibrillation, hypertension, diabetes, hyperlipidemia, smoking, physical inactivity, obesity, and unhealthy diets, making stroke prevention a complex but critical endeavor. 6
Early and accurate diagnosis of stroke is crucial for timely intervention, as delays can exacerbate outcomes and increase the risk of long-term disability. Non-contrast computed tomography (CT) remains the most widely used imaging modality in emergency settings due to its speed and utility in detecting intracranial hemorrhage. 7 However, its limitations in detecting ischemic stroke early, due to low soft-tissue contrast, present a significant challenge. 8 Recent advancements in artificial intelligence (AI), including machine learning and deep learning, offer new opportunities to address these limitations by improving diagnostic precision and clinical decision-making. 9 AI algorithms can analyze complex imaging data to detect subtle patterns and predict outcomes, offering the potential to revolutionize stroke diagnosis and treatment. 10
The goal of this comprehensive review is to explore the role of AI in revolutionizing stroke diagnosis and management. By examining recent developments in AI-driven approaches, this review seeks to highlight how these innovations can enhance diagnostic accuracy, streamline patient triage, and ultimately reduce the morbidity and mortality associated with stroke.
Materials and methods
Selection criteria
This systematic review was conducted in accordance with the PRISMA-DTA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses of Diagnostic Test Accuracy Studies) guidelines. 11 Since this review is based on previously published studies, no ethical approval or patient consent was required.
The articles included in this review were selected based on the PICOS (Population, Intervention, Comparator, Outcome, Study design) framework. The population of interest comprised stroke patients presenting in emergency departments. The intervention examined was the application of AI-based models in stroke imaging, particularly in modalities such as CT or MRI. The comparator for the analysis was an expert human interpreter, such as a radiologist, who served as the gold standard for diagnostic evaluation. The primary outcome was the diagnostic test accuracy of AI-based models, focusing on measures such as sensitivity, specificity, true positive (TP), true negative (TN), false positive (FP), and false negative (FN) rates. The study design included original research articles, such as randomized controlled trials (RCTs), prospective cohort studies, retrospective cohort studies, and cross-sectional studies, published in English.
Case reports, case series, review articles, meta-analyses, and editorials were excluded. Additionally, studies that did not use CT or MRI as the primary imaging modality or that lacked complete data on diagnostic accuracy estimates, including TP, TN, FP, and FN values, were not considered for inclusion.
The review focused on two main types of stroke: ischemic and hemorrhagic. Ischemic stroke was defined as the blockage of a blood vessel, which limits the blood supply to the brain. 7 Hemorrhagic stroke was defined as the rupture of a blood vessel, leading to the leakage of blood into the intracranial cavity. 7
Data extraction
The literature search and review process were conducted by two independent investigators following the predefined selection criteria. In instances where disagreements arose during the review, these were resolved by consultation with the senior author. The search strategy incorporated a combination of terms using Boolean operators (AND, OR) as follows: ((stroke) OR (infarction) OR (hemorrhage)) AND ((artificial intelligence) OR (neural network) OR (predictive algorithm) OR (deep learning) OR (DNN) OR (machine learning) OR (deep Bayesian) OR (bimodal learning) OR (contrast learning) OR (pyramid learning) OR (CNN)). We searched the electronic databases MEDLINE, Scopus, and the Cochrane Central Register of Controlled Trials from inception until January 1, 2024.
Outcome measures
The primary outcome of this review was the diagnostic test accuracy of AI-based systems in detecting ischemic and hemorrhagic strokes. This was measured by the area under the receiver operating characteristic curve (AUROC). Additionally, sensitivity (calculated as the ratio of TP to the sum of TP and FN), specificity (calculated as the ratio of TN to the sum of TN and FP), diagnostic odds ratio (DOR), positive likelihood ratio (LR+), and negative likelihood ratio (LR−) were calculated.
Secondary outcomes included positive predictive value (PPV), defined as the ratio of TP to the sum of TP and FP, and negative predictive value (NPV), defined as the ratio of TN to the sum of TN and FN.
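To make the arithmetic behind these measures explicit, the following sketch computes each of them from a single 2×2 table of TP, FP, FN, and TN counts. The function and the example counts are illustrative only and do not correspond to any included study.

```python
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Compute standard diagnostic test accuracy measures from a 2x2 table.

    tp/fp/fn/tn are counts of true positives, false positives, false
    negatives, and true negatives against the reference standard
    (here, the human expert read).
    """
    sensitivity = tp / (tp + fn)               # TP / (TP + FN)
    specificity = tn / (tn + fp)               # TN / (TN + FP)
    ppv = tp / (tp + fp)                       # positive predictive value
    npv = tn / (tn + fn)                       # negative predictive value
    lr_pos = sensitivity / (1 - specificity)   # LR+
    lr_neg = (1 - sensitivity) / specificity   # LR-
    dor = lr_pos / lr_neg                      # diagnostic odds ratio = (TP*TN)/(FP*FN)
    return {
        "sensitivity": sensitivity,
        "specificity": specificity,
        "PPV": ppv,
        "NPV": npv,
        "LR+": lr_pos,
        "LR-": lr_neg,
        "DOR": dor,
    }


if __name__ == "__main__":
    # Illustrative counts only, not taken from any included study.
    print(diagnostic_metrics(tp=90, fp=12, fn=10, tn=88))
```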
Statistical analyses
Data extraction focused primarily on collecting TP, TN, FP, and FN counts. Where these were unavailable, reported sensitivity and specificity values were used to reconstruct the TP, TN, FN, and FP counts. The DOR was calculated for constructing forest plots. Bivariate analysis of sensitivity and specificity using maximum likelihood estimation was employed to generate the AUROC; this analysis used 5000 iterations, with a burn-in period of 1000 iterations and 3 chains. LR+ and LR− were calculated using a random-effects model based on the DerSimonian-Laird method, along with their corresponding 95% confidence intervals (CI). A continuity correction of 0.5 was applied in the calculations. Subgroup analysis was not performed since all the included studies employed the same reference test, namely human experts. Similar methodology has also been employed previously.12,13
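As a rough illustration of the likelihood-ratio pooling described above, the sketch below applies a DerSimonian-Laird random-effects model to study-level log positive likelihood ratios, with the 0.5 correction added to every cell. It is a simplified stand-in for the full bivariate analysis; the function name and the input counts are hypothetical.

```python
import numpy as np


def dersimonian_laird_lr_plus(tables, cc=0.5):
    """Pool the positive likelihood ratio across studies with a
    DerSimonian-Laird random-effects model on the log scale.

    `tables` is a list of (TP, FP, FN, TN) counts; `cc` is the
    continuity correction added to every cell (0.5 in this review).
    Returns the pooled LR+ with its 95% confidence interval.
    """
    tables = np.asarray(tables, dtype=float) + cc
    tp, fp, fn, tn = tables.T

    # Per-study log LR+ and its approximate (delta-method) variance.
    log_lr = np.log((tp / (tp + fn)) / (fp / (fp + tn)))
    var = 1 / tp - 1 / (tp + fn) + 1 / fp - 1 / (fp + tn)

    # DerSimonian-Laird estimate of the between-study variance tau^2.
    w = 1 / var
    fixed = np.sum(w * log_lr) / np.sum(w)
    q = np.sum(w * (log_lr - fixed) ** 2)
    tau2 = max(0.0, (q - (len(tables) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

    # Random-effects pooled estimate and 95% CI back on the original scale.
    w_star = 1 / (var + tau2)
    pooled = np.sum(w_star * log_lr) / np.sum(w_star)
    se = np.sqrt(1 / np.sum(w_star))
    return np.exp(pooled), np.exp(pooled - 1.96 * se), np.exp(pooled + 1.96 * se)


# Hypothetical study-level counts, for illustration only.
print(dersimonian_laird_lr_plus([(90, 12, 10, 88), (120, 20, 15, 150), (45, 5, 8, 60)]))
```

The same machinery, applied to log negative likelihood ratios or log DORs, yields the other pooled estimates reported in the Results.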
Methodological quality
Two independent reviewers utilized the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tool to evaluate the methodological quality of the selected studies. The QUADAS-2 tool consists of four primary domains that assess both the risk of bias and the applicability of the study’s results. These domains include “patient selection,” “index test,” “reference standard,” and “flow and timing” of samples/patients. 14
For each domain, the risk of bias was evaluated and classified as high, unclear, or low based on specific criteria. A response of “Yes” indicated that the criterion was satisfied, “Unclear” if fulfillment was unclear, and “No” if the criterion was not met. Additionally, the first three domains were assessed for applicability, categorizing concerns as high, low, or unclear. Any disagreements between the two reviewers were resolved through consensus.
Results
Study selection
A total of 4434 records were identified through database searches, while no records were identified from registers. Prior to screening, 1766 duplicate records were removed, leaving 2668 records for the title and abstract screening. During this initial screening phase, 2390 records were excluded based on relevance, resulting in 278 reports being sought for full-text retrieval. All 278 reports were successfully retrieved and assessed for eligibility. Of these, 269 reports were excluded for the following reasons: 103 due to a different intervention, 68 due to reporting different outcomes, 63 for using the wrong study design, 26 due to incomplete or mixed data, and 9 for other reasons. Ultimately, 9 studies15–23 met the inclusion criteria and were included in the final meta-analysis (Figure 1).
Figure 1.
PRISMA Flow Diagram of the Study Selection Process. This flowchart illustrates the systematic study selection process in the meta-analysis. Of the 4434 records identified, 1766 duplicates were removed. After screening, 278 full-text reports were assessed, with 9 studies included in the final meta-analysis.
Study characteristics
The included studies varied in terms of sample size and diagnostic performance metrics for both hemorrhagic and ischemic strokes. Sample sizes ranged from 160 to over 4900 patients, with Kundisch et al. 17 and Seyam et al. 22 reporting the largest datasets for hemorrhagic stroke, with 4901 and 4450 patients, respectively. The median age across studies ranged from 57 to 72 years, with male patients comprising between 48% and 57% of the study populations. Diagnostic performance metrics, where reported, demonstrated strong sensitivity and specificity for AI-based systems in detecting ischemic and hemorrhagic strokes (Table 1). For instance, Abedi et al. 15 reported a sensitivity of 80.0% and specificity of 86.2% for ischemic stroke detection, while Amukotuwa et al. 16 achieved 94% sensitivity and 76% specificity. In terms of overall diagnostic accuracy, Kundisch et al. 17 and Seyam et al. 22 both reported an accuracy of 93.0%, highlighting the effectiveness of AI tools in clinical stroke diagnosis. Studies such as Lo et al. 18 and Morey et al. 19 reported more modest diagnostic metrics but demonstrated the continued value of AI integration in stroke care (Table 2).
Table 1.
Summary of AI/ML algorithms used in the included studies.
Study | Year | AI/ML algorithm used | Key features of the model | Dataset type (Internal vs. External) |
---|---|---|---|---|
Abedi et al. | 2017 | Artificial neural network | Supervised learning with backpropagation | Data collected from two tertiary stroke centers |
Amukotuwa et al. | 2019 | RAPID CTA | Fully automated large vessel occlusion detection | Retrospective analysis from a single regional hospital |
Kundisch et al. | 2021 | AIDOC AI | Deep learning model for intracranial hemorrhage detection | Data collected from 18 different hospitals |
Lo et al. | 2021 | Deep convolutional neural networks | Used AlexNet, Inception-v3, and ResNet-101 with transfer learning | Tested on an independent dataset from an external institution |
Morey et al. | 2021 | Viz LVO AI | AI-based stroke triage system for LVO detection on CTA | Data from a single stroke center |
Rava et al. | 2021 | Canon’s AUTOStroke solution | Machine learning model for hemorrhage detection | Data collected retrospectively from a single institution |
Schmitt et al. | 2022 | AI-based segmentation model | Used for hemorrhage detection in non-contrast CT | Data collected from multiple hospitals |
Seyam et al. | 2022 | AI-based ICH detection tool | Integrated into emergency workflow to detect intracranial hemorrhage | Single institution clinical workflow implementation |
Yu et al. | 2020 | U-net deep learning model | Segmentation model for ischemic stroke lesion prediction | Trained on multicenter datasets |
Table 2.
Demographic and diagnostic characteristics of the included studies.
Study | Year | Sample size (Hemorrhagic/Ischemic) | Age (Mean/Median) | Gender distribution (male %) | Sensitivity | Specificity |
---|---|---|---|---|---|---|
Abedi et al. | 2017 | Hemorrhagic: N/A, Ischemic: 260 | Mean: 57 | 48% | 80.0% | 86.2% |
Amukotuwa et al. | 2019 | Hemorrhagic: N/A, Ischemic: 477 | Median: 71 | 57% | 94% | 76% |
Kundisch et al. | 2021 | Hemorrhagic: 4901, Ischemic: 4946 | Median: 72 (IQR 56–83) | 52.5% | 87.2% | 93.9% |
Lo et al. | 2021 | Hemorrhagic: N/A, Ischemic: 224 | N/A | N/A | 97.2% | 95.7% |
Morey et al. | 2021 | Hemorrhagic: N/A, Ischemic: 646 | Mean: 72.8 | 50% | 72.3% | 77.6% |
Rava et al. | 2021 | Hemorrhagic: 358, Ischemic: N/A | N/A | N/A | N/A | N/A |
Schmitt et al. | 2022 | Hemorrhagic: 160, Ischemic: N/A | N/A | N/A | N/A | N/A |
Seyam et al. | 2022 | Hemorrhagic: 4450, Ischemic: N/A | N/A | N/A | 87.2% | 93.9% |
Yu et al. | 2020 | Hemorrhagic: N/A, Ischemic: 266 | N/A | N/A | N/A | N/A |
Efficacy outcomes
Ischemic stroke
A total of five studies15,16,18,19,23 reported diagnostic accuracy for ischemic stroke. The pooled analysis revealed a mean sensitivity of 86.9% (95% CI: 69.9%–95%) and a specificity of 88.6% (95% CI: 77.8%–94.5%), as illustrated by the HSROC curve (Figure 2). The positive likelihood ratio (LR+) was 7.6 (95% CI: 3.6–15.7), while the negative likelihood ratio (LR−) was 0.14 (95% CI: 0.05–0.37). The diagnostic odds ratio (DOR) was 51.5 (95% CI: 13.05–204.06), indicating a high overall diagnostic efficacy of AI-based models in identifying ischemic stroke.
Figure 2.
HSROC Curve for Ischemic Stroke Diagnosis. This HSROC curve shows the diagnostic accuracy of AI-based models for ischemic stroke, with a pooled sensitivity of 86.9% and specificity of 88.6% based on five studies. The curve illustrates the overall diagnostic performance with confidence and predictive regions.
Hemorrhagic stroke
Four studies17,20–22 evaluated the diagnostic accuracy for hemorrhagic stroke. The pooled mean sensitivity and specificity were 90.6% (95% CI: 86.2%–93.6%) and 93.9% (95% CI: 87.6%–97.2%), respectively, as represented in the HSROC curve (Figure 3). The positive likelihood ratio (LR+) was calculated at 14.9 (95% CI: 7.3–30.6), and the negative likelihood ratio (LR-) at 0.1 (95% CI: 0.07–0.144). The diagnostic odds ratio (DOR) for hemorrhagic stroke was 148.8 (95% CI: 79.9–277.2), reflecting the high diagnostic accuracy of AI-based systems in detecting hemorrhagic stroke.
Figure 3.
HSROC Curve for Hemorrhagic Stroke Diagnosis. This HSROC curve represents the diagnostic performance of AI-based models for hemorrhagic stroke, with a pooled sensitivity of 90.6% and specificity of 93.9% based on four studies. The curve includes confidence and predictive regions to show variability in diagnostic accuracy.
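As an arithmetic consistency check (not part of the original analyses), the pooled DORs reported above can be approximately recovered from the pooled sensitivities and specificities, since the DOR equals the ratio of the two likelihood ratios; small discrepancies arise because pooling was performed on study-level data rather than on the rounded summary estimates.

```latex
\[
\mathrm{LR}^{+}=\frac{\mathrm{sens}}{1-\mathrm{spec}},\qquad
\mathrm{LR}^{-}=\frac{1-\mathrm{sens}}{\mathrm{spec}},\qquad
\mathrm{DOR}=\frac{\mathrm{LR}^{+}}{\mathrm{LR}^{-}}.
\]
\[
\text{Ischemic: }\ \mathrm{DOR}\approx\frac{0.869/(1-0.886)}{(1-0.869)/0.886}
\approx\frac{7.62}{0.148}\approx 51.5\quad(\text{reported }51.5).
\]
\[
\text{Hemorrhagic: }\ \mathrm{DOR}\approx\frac{0.906/(1-0.939)}{(1-0.906)/0.939}
\approx\frac{14.85}{0.100}\approx 148\quad(\text{reported }148.8).
\]
```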
Quality assessment
The methodological quality of the studies was assessed using the QUADAS-2 tool, focusing on patient selection, index test, reference standard, and flow and timing (Figure 4). Most studies showed a low risk of bias, with Morey et al. 19 and Schmitt et al. 21 having unclear bias in patient selection and flow/timing domains. The index test and reference standard were consistently low-risk across all studies. In terms of applicability, minimal concerns were noted, with all studies rated low concern for index test and reference standard. Schmitt et al. 21 raised minor concerns regarding patient selection, but these were not significant enough to affect the overall conclusions.
Figure 4.
Risk of Bias and Applicability Concerns. This figure summarizes the risk of bias and applicability concerns across nine included studies, assessed using the QUADAS-2 tool. Most studies showed a low risk of bias and low applicability concerns in all assessed domains.
Discussion
The primary aim of our study was to assess the diagnostic accuracy of AI-based systems in detecting ischemic and hemorrhagic strokes when compared to human interpretation of neuroimaging. Our meta-analysis included nine studies, with five focused on ischemic stroke and four on hemorrhagic stroke. For ischemic stroke, the pooled sensitivity and specificity of AI systems were 86.9% and 88.6%, respectively, indicating a strong diagnostic capability that closely parallels human performance in stroke diagnosis. Similarly, AI systems demonstrated high accuracy in diagnosing hemorrhagic stroke, with pooled sensitivity and specificity of 90.6% and 93.9%, respectively. These findings highlight the potential of AI as a valuable tool for stroke detection, offering diagnostic accuracy that is comparable to or, in some cases, approaching that of trained radiologists.
Our study’s findings offer several unique contributions when compared to previous meta-analyses. First, Ghozy et al. 24 focused on the diagnostic accuracy of AI systems in detecting M2 segment middle cerebral artery occlusions, reporting a sensitivity of 64% and a specificity of 97%. Their analysis demonstrated that AI systems were more reliable as confirmatory tools rather than exclusionary tools in detecting these occlusions. Similarly, Hu et al. 25 conducted a meta-analysis of AI-based deep learning systems for intracranial hemorrhage detection and segmentation, achieving a pooled sensitivity of 89%, specificity of 91%, and an AUROC of 0.94. In contrast, our study demonstrated higher sensitivity and specificity for both ischemic and hemorrhagic stroke detection, positioning AI as a strong competitor to human interpretation. What sets our study apart is the focus on stroke subtypes (ischemic and hemorrhagic) and the comparison to human diagnosticians, not just as a confirmatory tool but in critical early intervention settings. Our results, particularly for hemorrhagic stroke, where sensitivity and specificity reached 90.6% and 93.9% respectively, indicate that AI can approach the diagnostic accuracy of neuroradiologists, who traditionally achieve sensitivity and specificity upwards of 98%. Additionally, while prior analyses were limited to either hemorrhagic or ischemic stroke detection, our study uniquely examines both, contributing a comprehensive perspective on AI’s utility across different stroke types.
Several factors may have influenced the results of our study and the performance of AI systems in stroke diagnosis. One major factor is the variation in imaging modalities used across studies.26,27 While most studies in our analysis used CT scans, some relied on MRI as the reference standard, particularly for ischemic stroke detection. MRI offers superior soft-tissue contrast compared to CT, which may lead to better sensitivity in detecting smaller ischemic lesions.28–30 This discrepancy in imaging techniques could have influenced the sensitivity and specificity reported in different studies. Additionally, variations in image acquisition parameters, such as slice thickness and scan phase, can significantly affect AI performance. 31 For instance, thinner slices may enhance resolution, allowing AI algorithms to detect subtle abnormalities more easily, whereas thicker slices could obscure important diagnostic details, resulting in lower sensitivity. 25
Another factor that may have contributed to the variability in results is the quality of training datasets used to develop the AI algorithms. 32 Studies that utilized large, balanced datasets with diverse patient populations likely trained more robust models, leading to higher diagnostic accuracy. 33 In contrast, algorithms trained on smaller, less diverse datasets may be prone to overfitting or underfitting, which can limit their generalizability across broader patient populations. 33 Furthermore, artifacts such as beam hardening and partial volume effects, especially in regions like the posterior fossa and near the skull, may have reduced the accuracy of AI in detecting hemorrhagic lesions.34,35 These artifacts are more common in CT scans, particularly when lower-quality imaging techniques are used.24,25 Additionally, the size and location of the lesions themselves are critical factors—AI models may struggle to detect smaller lesions or lesions in anatomically complex regions, which may also account for variations in sensitivity and specificity across studies.25,36
Looking forward, there are several key areas for further research and development in the use of AI for stroke diagnosis. One crucial direction is the standardization of imaging protocols and AI algorithm training. 37 Ensuring consistency in image acquisition parameters, such as slice thickness and scan timing, across different clinical settings will help improve the generalizability of AI models.38,39 Additionally, developing larger, more diverse, and balanced datasets will reduce the risk of overfitting and increase the robustness of these algorithms across various patient populations and imaging conditions. 39 Future studies should also focus on conducting multicenter randomized controlled trials to validate AI systems in real-world clinical environments. This would provide stronger evidence of AI’s efficacy in improving patient outcomes, particularly in reducing diagnostic times and expediting early interventions in stroke care.
Moreover, AI’s role in the clinical workflow should be expanded beyond diagnostics. AI could be integrated into predictive models for stroke risk in high-risk populations, providing earlier identification and intervention.40,41 As AI models continue to improve, their use could extend into decision support, guiding treatment plans and predicting patient recovery trajectories. Incorporating AI into routine clinical practice will also necessitate updated guidelines on data security, bias prevention, and ongoing post-implementation monitoring. 42 Addressing these challenges will ensure that AI-based stroke management systems remain reliable and beneficial as the technology continues to evolve.
The included studies employed a range of machine learning and AI models, from traditional artificial neural networks (ANN) to modern deep learning architectures such as ResNet and U-Net (Table 1). Early studies used ANN, which, while effective, lacked the advanced feature extraction capabilities of newer deep learning models. In contrast, recent studies leveraged convolutional neural networks and deep segmentation architectures, significantly improving sensitivity and specificity. Notably, models such as ResNet-101 and U-Net have demonstrated superior performance in medical imaging due to their deeper architecture and transfer learning capabilities. The evolution of AI in stroke detection has enhanced diagnostic accuracy, reduced false positives, and improved generalizability across datasets. However, differences in model selection introduce heterogeneity, potentially influencing meta-analysis outcomes. Future studies should consider standardizing AI model comparisons to assess their true clinical impact systematically.
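The included studies used proprietary or custom pipelines whose internals are not reported in detail, but the transfer-learning pattern described above can be outlined as follows. This is an illustrative sketch under stated assumptions (a torchvision ResNet-101 backbone, a binary stroke/no-stroke classification head, and random tensors standing in for preprocessed CT slices), not a reconstruction of any included study's method.

```python
import torch
import torch.nn as nn
from torchvision import models

# A ResNet-101 backbone pretrained on natural images (requires a recent
# torchvision; weights are downloaded on first use). CT preprocessing,
# windowing, and channel replication are omitted here.
backbone = models.resnet101(weights=models.ResNet101_Weights.DEFAULT)

# Freeze the pretrained feature extractor; only the new head is trained.
for param in backbone.parameters():
    param.requires_grad = False

# Replace the final fully connected layer for a binary stroke / no-stroke decision.
backbone.fc = nn.Linear(backbone.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-4)

# One illustrative training step on random tensors standing in for a
# batch of preprocessed slices (batch, channels, height, width).
images = torch.randn(4, 3, 224, 224)
labels = torch.tensor([0, 1, 1, 0])

optimizer.zero_grad()
logits = backbone(images)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()
```

Freezing the backbone and training only the new head is a common first step when labeled medical imaging data are scarce; in practice, the studies summarized above differ in how much of the network they fine-tune and in how slices or volumes are fed to the model.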
One of the key strengths of our study is the comprehensive nature of the meta-analysis, which included studies focused on both ischemic and hemorrhagic stroke. This allowed us to assess AI performance across different stroke types, providing a more complete picture of its diagnostic capabilities. Additionally, the inclusion of multiple imaging modalities, such as CT and MRI, reflects real-world clinical practice and offers insights into AI’s adaptability to various diagnostic settings. Our study also benefited from a robust methodological assessment using the QUADAS-2 tool, ensuring that the included studies were of high quality and had minimal bias.
However, our study also has limitations. First, the heterogeneity in imaging techniques and AI algorithms used across the studies may have contributed to variability in the results. The lack of standardization in image acquisition protocols, such as slice thickness and timing, could have influenced the sensitivity and specificity reported. Additionally, many of the included studies relied on retrospective data, which may limit the generalizability of the findings to prospective, real-world clinical settings. Finally, while we focused on diagnostic accuracy, our study did not explore other important factors such as cost-effectiveness, implementation challenges, or the potential for AI to reduce clinician workload, all of which are crucial considerations for integrating AI into routine stroke care.
Conclusion
In summary, our meta-analysis demonstrates that AI-based systems show strong diagnostic accuracy in detecting both ischemic and hemorrhagic strokes, with performance metrics that closely approach those of human radiologists. While AI cannot yet replace human expertise, it has the potential to significantly reduce diagnostic times, improve patient flow, and support early interventions in stroke care. As AI technology continues to evolve, its integration into clinical workflows, coupled with further standardization and larger datasets, will be essential for realizing its full potential in enhancing stroke diagnosis and management.
Footnotes
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding: The authors received no financial support for the research, authorship, and/or publication of this article.
ORCID iDs
Yumna Fatima https://orcid.org/0009-0000-4012-5113
S. Umar Hasan https://orcid.org/0000-0002-9128-1658
Data Availability Statement
All data generated or analyzed during this study are included in this published article.
References
1. Murphy SJ, Werring DJ. Stroke: causes and clinical features. Medicine 2020; 48: 561–566.
2. Johnson CO, Nguyen M, Roth GA. Global, regional, and national burden of stroke, 1990–2016: a systematic analysis for the global burden of disease study 2016. Lancet Neurol 2019; 18: 439–458.
3. Campbell B, Khatri P. Stroke. Lancet 2020; 396: 129–142.
4. Esenwa C, Gutierrez J. Secondary stroke prevention: challenges and solutions. Vasc Health Risk Manag 2015; 11: 437–450.
5. Paul S, Candelario-Jalil E. Emerging neuroprotective strategies for the treatment of ischemic stroke: an overview of clinical and preclinical studies. Exp Neurol 2021; 335: 113518.
6. Hardy M, Harvey H. Artificial intelligence in diagnostic imaging: impact on the radiography profession. Br J Radiol 2020; 93: 20190840.
7. Shafaat O, Sotoudeh H. Stroke imaging. In: StatPearls [Internet]. StatPearls Publishing, 2023.
8. Mainali S, Darsie ME, Smetana KS. Machine learning in action: stroke diagnosis and outcome prediction. Front Neurol 2021; 12: 734345.
9. Lee E-J, Kim Y-H, Kim N, et al. Deep into the brain: artificial intelligence in stroke imaging. J Stroke 2017; 19: 277–285.
10. Cai T, Ni H, Yu M, et al. DeepStroke: an efficient stroke screening framework for emergency rooms with multimodal adversarial deep learning. Med Image Anal 2022; 80: 102522.
11. McInnes MD, Moher D, Thombs BD, et al. Preferred reporting items for a systematic review and meta-analysis of diagnostic test accuracy studies: the PRISMA-DTA statement. JAMA 2018; 319: 388–396.
12. Pervez A, Hasan SU, Hamza M, et al. Diagnostic accuracy of tests for tuberculous pericarditis: a network meta-analysis. Indian J Tuberc 2024; 71: 185–194.
13. Hasan SU, Pervez A, Usmani SUR, et al. Comparative analysis of pinning techniques for supracondylar humerus fractures in paediatrics: a systematic review and meta-analysis of randomized controlled trials. J Orthop 2023; 44: 5–11.
14. Whiting PF, Rutjes AW, Westwood ME, et al. QUADAS-2: a revised tool for the quality assessment of diagnostic accuracy studies. Ann Intern Med 2011; 155: 529–536.
15. Abedi V, Goyal N, Tsivgoulis G, et al. Novel screening tool for stroke using artificial neural network. Stroke 2017; 48: 1678–1681.
16. Amukotuwa SA, Straka M, Smith H, et al. Automated detection of intracranial large vessel occlusions on computed tomography angiography: a single center experience. Stroke 2019; 50: 2790–2798.
17. Kundisch A, Hönning A, Mutze S, et al. Deep learning algorithm in detecting intracranial hemorrhages on emergency computed tomographies. PLoS One 2021; 16: e0260560.
18. Lo C-M, Hung P-H, Lin D-T. Rapid assessment of acute ischemic stroke by computed tomography using deep convolutional neural networks. J Digit Imaging 2021; 34: 637–646.
19. Morey JR, Zhang X, Yaeger KA, et al. Real-world experience with artificial intelligence-based triage in transferred large vessel occlusion stroke patients. Cerebrovasc Dis 2021; 50: 450–455.
20. Rava RA, Seymour SE, LaQue ME, et al. Assessment of an artificial intelligence algorithm for detection of intracranial hemorrhage. World Neurosurg 2021; 150: e209–e217.
21. Schmitt N, Mokli Y, Weyland C, et al. Automated detection and segmentation of intracranial hemorrhage suspect hyperdensities in non-contrast-enhanced CT scans of acute stroke patients. Eur Radiol 2022; 32: 2246–2254.
22. Seyam M, Weikert T, Sauter A, et al. Utilization of artificial intelligence–based intracranial hemorrhage detection on emergent noncontrast CT images in clinical workflow. Radiol Artif Intell 2022; 4: e210168.
23. Yu Y, Xie Y, Thamm T, et al. Use of deep learning to predict final ischemic stroke lesions from initial magnetic resonance imaging. JAMA Netw Open 2020; 3: e200772.
24. Ghozy S, Azzam AY, Kallmes KM, et al. The diagnostic performance of artificial intelligence algorithms for identifying M2 segment middle cerebral artery occlusions: a systematic review and meta-analysis. J Neuroradiol 2023; 50: 449–454.
25. Hu P, Yan T, Xiao B, et al. Deep learning-assisted detection and segmentation of intracranial hemorrhage in noncontrast computed tomography scans of acute stroke patients: a systematic review and meta-analysis. Int J Surg 2024; 110: 3839–3847.
26. Burke JF, Kerber KA, Iwashyna TJ, et al. Wide variation and rising utilization of stroke magnetic resonance imaging: data from 11 states. Ann Neurol 2012; 71: 179–185.
27. Srinivasan A, Goyal M, Al Azri F, et al. State-of-the-art imaging of acute stroke. Radiographics 2006; 26: S75–S95.
28. Vilela P, Rowley HA. Brain ischemia: CT and MRI techniques in acute ischemic stroke. Eur J Radiol 2017; 96: 162–172.
29. Lansberg MG, Albers GW, Beaulieu C, et al. Comparison of diffusion-weighted MRI and CT in acute stroke. Neurology 2000; 54: 1557–1561.
30. Chalela JA, Kidwell CS, Nentwich LM, et al. Magnetic resonance imaging and computed tomography in emergency assessment of patients with suspected acute stroke: a prospective comparison. Lancet 2007; 369: 293–298.
31. Yedavalli VS, Tong E, Martin D, et al. Artificial intelligence in stroke imaging: current and future perspectives. Clin Imaging 2021; 69: 246–254.
32. Batista GEAPA, Prati RC, Monard MC. A study of the behavior of several methods for balancing machine learning training data. SIGKDD Explor Newsl 2004; 6: 20–29.
33. Jordan MI, Mitchell TM. Machine learning: trends, perspectives, and prospects. Science 2015; 349: 255–260.
34. Hammersberg P, Mångård M. Correction for beam hardening artefacts in computerised tomography. J X Ray Sci Technol 1998; 8: 75–93.
35. Verburg JM, Seco J. CT metal artifact reduction method correcting for beam hardening and missing projections. Phys Med Biol 2012; 57: 2803–2818.
36. Murray NM, Unberath M, Hager GD, et al. Artificial intelligence to diagnose ischemic stroke and identify large vessel occlusions: a systematic review. J Neurointerv Surg 2020; 12: 156–164.
37. Cobo M, Menéndez Fernández-Miranda P, Bastarrika G, et al. Enhancing radiomics and deep learning systems through the standardization of medical imaging workflows. Sci Data 2023; 10: 732.
38. Willemink MJ, Koszek WA, Hardell C, et al. Preparing medical imaging data for machine learning. Radiology 2020; 295: 4–15.
39. Kalra A, Chakraborty A, Fine B, et al. Machine learning for automation of radiology protocols for quality and efficiency improvement. J Am Coll Radiol 2020; 17: 1149–1158.
40. Mouridsen K, Thurner P, Zaharchuk G. Artificial intelligence applications in stroke. Stroke 2020; 51: 2573–2579.
41. Soun JE, Chow DS, Nagamine M, et al. Artificial intelligence and acute stroke imaging. AJNR Am J Neuroradiol 2021; 42: 2–11.
42. Chew SY, Koh MS, Loo CM, et al. Making clinical practice guidelines pragmatic: how big data and real world evidence can close the gap. Ann Acad Med Singapore 2018; 47: 523–527.