Abstract
The objective of this study was to assess the performance of an artificial intelligence (AI) algorithm in detecting intracranial haemorrhage (ICH) on non-contrast cranial CT scans (NCCT) and to gauge the department's acceptance of the algorithm. Surveys conducted at three and nine months post-implementation revealed an increase in radiologists' acceptance of the AI tool as its observed performance improved. However, a substantial proportion of radiologists still preferred an additional physician at comparable cost. Our findings emphasize the importance of careful software implementation into a robust IT architecture.
Keywords: intracranial haemorrhage, deep learning, artificial intelligence
Introduction
Non-contrast cranial computed tomography (NCCT) is widely available and remains the gold-standard imaging modality for diagnosing intracranial haemorrhage (ICH), offering excellent sensitivity and specificity.1 The overall incidence of spontaneous ICH is 24.6 per 100,000.2 Early detection of ICH enables physicians to start treatment promptly, and very early intervention may help arrest active bleeding and limit haematoma expansion.3 Mortality rates following ICH vary with the duration of follow-up, with short-term mortality typically lower than long-term mortality; one-year mortality ranges from 34% to 53%.4–8
Research on and clinical application of artificial intelligence (AI) in healthcare have increased and become more relevant. AI has found use in the emergency room (ER),9 in triaging patients with suspected COVID-19,10 and even in diagnosing acute conditions such as stroke.11
Multiple studies have evaluated the use of AI in the acute setting for the detection of ICH on NCCT, demonstrating promising diagnostic accuracy. Among the commercially available products, the AIDOC software has been investigated in multiple peer-reviewed publications, with high sensitivity and specificity consistently reported across a variety of clinical settings: sensitivities between 84% and 99% and specificities between 93% and 99% in detecting intracranial haemorrhages.12–19 These papers highlight not only AIDOC's diagnostic performance but also its potential to assist with triage and prioritization in emergency radiology.
We hypothesized that the introduction of the AIDOC AI software would lead to moderate acceptance within the department and serve as a useful adjunct tool.
Materials and methods
We obtained institutional review board (IRB) approval for this prospective analysis.
Study design
We implemented a commercially available, AI-based tool for algorithm-aided detection and prioritization of ICH in August 2021. In October 2021, we analyzed all NCCTs acquired within a three-week period and compared the radiologists' reports with the notifications of the AI algorithm. Both 4 mm and 1 mm thick slices acquired on a Siemens Somatom CT scanner were sent to AIDOC for analysis. Finally, we included a case study.
A neuroradiologist with 20 years of experience reviewed all CT scans and served as the reference standard, determining whether the original interpreting radiologist or the AI algorithm provided the correct assessment. After this three-week analysis, we conducted a survey in our university department (general hospital of Salzburg) to gauge all radiologists’ acceptance of the AI software. The survey included a question evaluating whether the algorithm was perceived to provide more value than the addition of a resident position (Figure 1).
Figure 1.
How the algorithm marks an intracranial haemorrhage (ICH) using computer vision.
In May 2022, after an additional six months of continuous clinical use of the AI algorithm by the radiologists, another three-week analysis and a second user survey were performed as described above.
We then compared our calculated sensitivity and specificity with those reported in other studies of the same AI algorithm.
False negatives were defined as cases in which the original interpreting radiologist or the AI algorithm interpreted a scan as not containing an ICH when an ICH was present. False positives were defined as cases in which the original interpreting radiologist or the AI algorithm reported an ICH although none was detectable on the scan. Discrepancies were defined as reports that contained no ICH according to the original radiologist's interpretation but were flagged positive by the AI algorithm.
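These outcome definitions can be expressed as a small classification helper. This is an illustrative sketch only; the function names are ours and are not part of the study software:

```python
def classify(reference: bool, reported: bool) -> str:
    """Classify one scan outcome against the reference standard.

    reference: True if the scan truly contains an ICH (expert review).
    reported:  True if the reader (radiologist or AI algorithm) flagged an ICH.
    """
    if reference and reported:
        return "true positive"
    if reference and not reported:
        return "false negative"   # ICH present but missed
    if not reference and reported:
        return "false positive"   # ICH flagged but not present
    return "true negative"


def is_discrepancy(radiologist_positive: bool, ai_positive: bool) -> bool:
    """A discrepancy: the radiologist's report contains no ICH,
    but the AI algorithm flagged the scan as positive."""
    return ai_positive and not radiologist_positive
```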
AI algorithm
The cloud-based AI algorithm (AIDOC, Tel Aviv, Israel) is a medical-grade deep learning algorithm intended as an interpretative aid for detecting ICH. It is FDA approved and CE marked. The algorithm automatically analyzes every scan in the background around the clock and flags scans suspicious for ICH, triggering a pop-up notification widget. Its accuracy has been demonstrated in multiple studies.17,20,21 Scans are sent to the provider in pseudonymized form; once the analysis by the AI algorithm is complete, the scans and results are returned and the pseudonymization is reversed by the hospital's IT department. For the duration of the study, the heads of the radiology and neuroradiology departments held biweekly meetings to provide feedback to AIDOC.
Statistical analysis
We calculated sensitivity, specificity, positive predictive value, and negative predictive value. All statistical analyses were performed with SPSS.
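The four metrics follow directly from the confusion-matrix counts. As an illustration (not the SPSS output itself), the AI algorithm's counts from the first analysis reproduce the values in Table 1 up to rounding:

```python
def metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Standard diagnostic accuracy metrics from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),  # proportion of ICHs detected
        "specificity": tn / (tn + fp),  # proportion of ICH-free scans cleared
        "ppv": tp / (tp + fp),          # probability a flagged scan has an ICH
        "npv": tn / (tn + fn),          # probability a cleared scan is ICH-free
    }


# AI algorithm, first analysis (Table 1): TP=39, TN=317, FP=13, FN=3
m = metrics(39, 317, 13, 3)
print({k: f"{v:.1%}" for k, v in m.items()})
# {'sensitivity': '92.9%', 'specificity': '96.1%', 'ppv': '75.0%', 'npv': '99.1%'}
```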
Results
First analysis
From 22.09.2021 until 14.10.2021, a total of n = 372 NCCTs were read and interpreted by both radiologists and the AI algorithm. No NCCTs within this time frame were excluded. Of these 372 scans, n = 43 depicted an ICH. A total of n = 34 scans (9%) were not interpreted by the AI algorithm; 11 of those 34 (32%) included an ICH (Table 1).
Table 1.
Table depicting 372 NCCTs interpreted by both an algorithm and a radiologist during the first analysis.
| n = 372 | True positive | True negative | False positive | False negative | Sensitivity | Specificity | Positive predictive value | Negative predictive value |
|---|---|---|---|---|---|---|---|---|
| AI algorithm | 39 | 317 | 13 | 3 | 92.9% | 96% | 75% | 99% |
| Radiologist | 38 | 329 | 0 | 5 | 88.4% | 100% | 100% | 99% |
Second analysis
From 04.04.2022 until 24.04.2022, a total of n = 351 NCCTs were read and interpreted by both radiologists and the AI algorithm. Of these 351 scans, n = 51 depicted an ICH. No NCCTs within this time frame were excluded (Table 2).
Table 2.
Table depicting 351 NCCTs interpreted by both an algorithm and a radiologist during the second analysis.
| n = 351 | True positive | True negative | False positive | False negative | Sensitivity | Specificity | Positive predictive value | Negative predictive value |
|---|---|---|---|---|---|---|---|---|
| AI algorithm | 40 | 289 | 11 | 11 | 78.4% | 96.3% | 78.4% | 96.3% |
| Radiologist | 44 | 299 | 1 | 7 | 86.3% | 99.7% | 97.8% | 97.7% |
First survey
Results of the first survey, conducted in November 2021 with 27 of 40 radiologists at the university department in Salzburg, three months after implementation of the algorithm, are presented in Table 3.
Table 3.
Table visualizing the results of the first survey rated based on agreement.
| n = 27 | Strongly agree | Agree | Somewhat agree | Disagree | Strongly disagree |
|---|---|---|---|---|---|
| The algorithm makes me feel more secure in my reporting | 2 (7.4%) | 7 (25.9%) | 11 (40.7%) | 4 (14.8%) | 3 (11.1%) |
| The algorithm is practicable for use with the PACS | 2 (7.4%) | 13 (48.2%) | 9 (33.3%) | 3 (11.1%) | 0 (0%) |
| The algorithm increases the speed of my reporting | 1 (3.7%) | 0 (0%) | 4 (14.8%) | 12 (44.4%) | 10 (37.0%) |
| The algorithm makes me feel pressed for time | 0 (0%) | 1 (3.7%) | 1 (3.7%) | 8 (29.6%) | 17 (62.9%) |
| The algorithm detected ICH that I overlooked | 2 (7.4%) | 0 (0%) | 4 (14.8%) | 12 (44.4%) | 9 (33.3%) |
| Should the algorithm remain as a tool in the department? | 5 (18.5%) | 7 (25.9%) | 7 (25.9%) | 5 (18.5%) | 3 (11.1%) |
| Considering the added value of the algorithm, would an additional resident position be more useful? | 11 (40.7%) | 8 (29.6%) | 2 (7.4%) | 3 (11.1%) | 3 (11.1%) |
Second survey
Results of the second survey, conducted in May 2022 (six months after the first survey and nine months after implementation of the algorithm) with 9 of 40 radiologists at the university department in Salzburg, are presented in Table 4.
Table 4.
Table visualizing the results of the second survey rated based on agreement.
| n = 9 | Strongly agree | Agree | Somewhat agree | Disagree | Strongly disagree |
|---|---|---|---|---|---|
| The algorithm makes me feel more secure in my reporting | 2 (22.2%) | 5 (55.6%) | 1 (11.1%) | 0 (0%) | 1 (11.1%) |
| The algorithm is practicable for use with the IMPAX | 3 (33.3%) | 4 (44.4%) | 1 (11.1%) | 1 (11.1%) | 0 (0%) |
| The algorithm increases the speed of my reporting | 1 (11.1%) | 2 (22.2%) | 1 (11.1%) | 2 (22.2%) | 3 (33.3%) |
| The algorithm makes me feel pressed for time | 0 (0%) | 0 (0%) | 1 (11.1%) | 3 (33.3%) | 5 (55.6%) |
| The algorithm detected ICH that I overlooked | 1 (11.1%) | 1 (11.1%) | 3 (33.3%) | 1 (11.1%) | 3 (33.3%) |
| Should the algorithm remain as a tool in the department? | 3 (33.3%) | 2 (22.2%) | 2 (22.2%) | 1 (11.1%) | 1 (11.1%) |
| Considering the added value of the algorithm, would an additional resident position be more useful? | 5 (55.6%) | 1 (11.1%) | 1 (11.1%) | 0 (0%) | 2 (22.2%) |
Case study
A small, clearly visible post-traumatic intra-axial haemorrhage in the right frontal lobe on NCCT was detected by both the radiologist and AIDOC. On the follow-up NCCT performed the next day, the AI software no longer detected this small haemorrhage, although it had not changed (Figure 2).
Figure 2.
A small, clearly visible post-traumatic intra-axial haemorrhage in the right frontal lobe on NCCT, recognized by both the radiologist and AIDOC. On the follow-up NCCT the next day, AIDOC did not recognize or flag the haemorrhage, although it had not changed.
Discussion
How did the algorithm perform?
The diagnostic detection rate of the AI software for intracranial haemorrhages was comparable to that of the interpreting radiologists, but the software occasionally misclassified as haemorrhages benign findings that radiologists could readily attribute to other changes. As mentioned in the Introduction, previous studies reported sensitivities between 84 and 99% and specificities between 93 and 99% in detecting intracranial haemorrhages,12–19 which is consistent with our results. The only notable discrepancy in our study was the algorithm's lower sensitivity in the second analysis (78.4%) compared with previous studies; its sensitivity in the first analysis and its specificity in both analyses were within the reported range (Figure 3).
Figure 3.
A known parenchyma defect labelled as an intracranial haemorrhage (ICH) by the artificial intelligence algorithm (false positive).
It is important to note that not all NCCTs were labelled or analyzed by the algorithm, and there was no feedback as to which scans were omitted. In the first analysis, 34 of 372 NCCTs (9%) were not interpreted by the AI algorithm; 11 of those 34 (32%) included an ICH.
Our study shows that integrating a specific automated head CT diagnostic tool into the more complex, general CT workflows of a full-service hospital revealed unexpected problems in the local components of the AI system. Approximately one-third of intracranial haemorrhages would have gone undetected had automated CT diagnosis alone been relied upon, owing to non-obvious errors in software integration. Successful implementation of such software within a clinical IT architecture is therefore a critical factor: the smooth clinical performance of the AI software depends heavily on the full functionality of the underlying technical infrastructure.
Although our study suggests that the AI solution is able to identify small haemorrhages, its detections were not always consistent. For example, a small but clearly visible right frontal haemorrhage on NCCT was initially detected by AIDOC, but not on the follow-up scan the next day. This inconsistency in the software's detection performance suggests that further research into the reliability of such automated detection systems is necessary.
The majority of the analyzed NCCTs were performed in the context of trauma. Not a single case of intraventricular haemorrhage was detected by the algorithm, whereas it demonstrated strong performance in identifying discrete subarachnoid haemorrhages. The algorithm did not classify the subtype of haemorrhage detected, which is relevant because different types of ICH have different clinical consequences. Nor did it analyze the size of the ICH: a volume below 30 cm3 carries the lowest mortality, compared with volumes of 30–60 cm3 and above 60 cm3.7,23,24
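The volume strata above can be sketched as a simple categorization. This is an illustration of the cut-offs reported in the cited mortality studies; the function and labels are ours and are not part of the AIDOC software:

```python
def volume_category(volume_cm3: float) -> str:
    """Bucket an ICH volume (cm^3) into the mortality strata
    reported in the referenced studies (Broderick et al. and others)."""
    if volume_cm3 < 30:
        return "<30 cm^3 (lowest mortality)"
    if volume_cm3 <= 60:
        return "30-60 cm^3 (intermediate)"
    return ">60 cm^3 (highest mortality)"
```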
Opinion of department
The study shows a trend towards higher acceptance of the algorithm over time, as reflected in the survey responses. This increased acceptance correlates with the algorithm's improved observed performance during the second analysis, as several parameters reflecting its diagnostic contribution improved over the six-month interval; importantly, this refers to enhanced performance in clinical use, not to any technical modification of the algorithm itself. In particular, agreement with the two survey items stating that the algorithm makes the radiologist feel more secure in reporting and that the algorithm should remain as a tool in the department increased markedly: the proportion of respondents who agreed or strongly agreed that the algorithm made them feel more secure rose from 33.3% to 77.8%, and support for keeping the algorithm in the department rose from 44.4% to 55.6%. These findings suggest that acceptance of the algorithm within the department improved considerably over time.
Approximately two-thirds of respondents in both surveys agreed or strongly agreed that an additional resident position would be more useful. Unlike the AI tool, a resident can identify not only ICH but also other urgent findings such as hydrocephalus or midline shift. This question was included because the algorithm costs approximately as much as the yearly salary of a resident. Lastly, 7.4% of respondents in the first survey agreed or strongly agreed that the algorithm detected an ICH they would have overlooked, compared with 22.2% in the second survey.
Possible benefits of an algorithm
We identify several potential clinical applications for the algorithm. It alerts physicians to possible ICH, which aids rapid evaluation of NCCTs in the emergency setting. In some centres, suspected ICH cases can be marked as priority studies, allowing interpretation within an hour of scan acquisition.25 This is particularly important, as acute deterioration due to haemorrhage expansion commonly occurs within the initial 3–4.5 hours after symptom onset.26–28 Algorithms can also support physicians in triaging patients.29
Artificial intelligence has also been found to reduce throughput read time and can assist in reprioritizing radiologists' reading worklists.22 In our radiology department, we implemented a deep learning algorithm that detects ICH on NCCTs and alerts radiologists via a pop-up notification widget highlighting the suspected finding. This enables radiologists to reprioritize their exam list accordingly and focus their attention on a potentially time-sensitive emergency.
Over the past decades, the number of ordered exams within the hospital setting has increased considerably.30 The rising workload may contribute to an increase in radiologists' error rates.31 AI algorithms have been shown to improve the diagnostic accuracy of interpreting physicians, particularly junior physicians with less experience reading NCCT scans.32 Even board-certified radiologists, when compared with sub-specialist neuroradiologists, can demonstrate discrepancy rates of up to 12%,33,34 and double reading by a sub-specialist generally leads to high rates of changed reports.35 Given that a second look by a radiologist improves the rate of correct reporting, a similar benefit may apply when an algorithm provides the second look.36–39
Strengths and weaknesses of the study
Several limitations should be considered when interpreting our results. First, the examinations in this study were conducted at a single hospital; further work is necessary to evaluate performance across a variety of vendors and scanning protocols at other institutions. Whereas most centres typically use a 4 mm slice thickness, we also sent 1 mm slices for analysis. It remains unclear whether section thickness contributed to the under-detection of some urgent findings, and further evaluation of the algorithm with thin-section images would be needed to assess for improved sensitivity. Finally, the overall sample size in our study was relatively small, and more research is required.
Another limitation of the study is the relatively low number of respondents in the initial survey. The smaller number of respondents in the second survey (9 vs. 27) may further limit the generalizability of the findings and increase the risk of bias.
Conclusion
The experience with the AIDOC software at our institute demonstrates that this AI software serves as a useful supporting tool in the narrow diagnostic segment of ICH detection but requires expert supervision. When given the choice, radiologists expressed a preference for an additional physician over the AI algorithm at comparable cost. Careful implementation of the software into the complex clinical IT architecture is vital.
Appendix.
List of abbreviations
- ICH
intracranial haemorrhage
- NCCT
non-contrast cranial computed tomography
- ER
emergency room
- IRB
institutional review board
- AI
artificial intelligence
- IT
information technology
Author note: This publication is approved by all authors.
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding: The authors received no financial support for the research, authorship, and/or publication of this article.
Ethical statement
Ethical approval
All human studies have been approved by the appropriate ethics committee and have therefore been performed in accordance with the ethical standards laid down in the 1964 Declaration of Helsinki.
ORCID iDs
Selim Abed https://orcid.org/0009-0003-0989-0249
Johannes Pfaff https://orcid.org/0000-0003-0672-5718
References
- 1. Greenberg JCSM, Anderson CS, Becker K, et al. Guidelines for the management of spontaneous intracerebral hemorrhage. Stroke 2015; 46(7): 2032–2060. DOI: 10.1161/STR.0000000000000069.
- 2. van Asch CJ, Luitse MJ, Rinkel GJ, et al. Incidence, case fatality, and functional outcome of intracerebral haemorrhage over time, according to age, sex, and ethnic origin: a systematic review and meta-analysis. Lancet Neurol 2010; 9(2): 167–176. DOI: 10.1016/S1474-4422(09)70340-0.
- 3. Mayer SA. Ultra-early hemostatic therapy for intracerebral hemorrhage. Stroke 2003; 34(1): 224–229. DOI: 10.1161/01.str.0000046458.67968.e4.
- 4. Sun T, Yuan Y, Wu K, et al. Trends and patterns in the global burden of intracerebral hemorrhage: a comprehensive analysis from 1990 to 2019. Front Neurol 2023; 14: 1241158. DOI: 10.3389/fneur.2023.1241158.
- 5. Nätynki AM, Huhtakangas J, Bode M, et al. Long‐term survival after primary intracerebral hemorrhage: a population‐based case-control study spanning a quarter of a century. Eur J Neurol 2021; 28(11): 3663–3669. DOI: 10.1111/ene.14988.
- 6. Qureshi SMD, Talarico R, Tanuseputro P, et al. Intracerebral hemorrhage incidence, mortality, and association with oral anticoagulation use. Stroke 2021; 52(5): 1673–1681. DOI: 10.1161/STROKEAHA.120.032550.
- 7. Broderick JP, Brott TG, Duldner JE, et al. Volume of intracerebral hemorrhage. A powerful and easy-to-use predictor of 30-day mortality. Stroke 1993; 24(7): 987–993. DOI: 10.1161/01.STR.24.7.987.
- 8. Haverbusch MLM, Sekar P, Kissela B, et al. Long-term mortality after intracerebral hemorrhage. Neurology 2006; 66(8): 1182–1186. DOI: 10.1212/01.wnl.0000208400.08722.7c.
- 9. Weisberg EM, Chu LC, Fishman EK. The first use of artificial intelligence (AI) in the ER: triage not diagnosis. Emerg Radiol 2020; 27(4): 361–366. DOI: 10.1007/s10140-020-01773-6.
- 10. Choi CKJW, Jiao Z, Wang D, et al. An automated COVID-19 triage pipeline using artificial intelligence based on chest radiographs and clinical data. npj Digit Med 2022; 5(1): 5. DOI: 10.1038/s41746-021-00546-w.
- 11. Chow JEDS, Nagamine M, Takhtawala RS, et al. Artificial intelligence and acute stroke imaging. AJNR Am J Neuroradiol 2021; 42(1): 2–11. DOI: 10.3174/ajnr.A6883.
- 12. Ginat DT. Analysis of head CT scans flagged by deep learning software for acute intracranial hemorrhage. Neuroradiology 2020; 62(3): 335–340. DOI: 10.1007/s00234-019-02330-w.
- 13. Seifi AA, Seifi A. Diagnostic accuracy of deep learning for intracranial hemorrhage detection in non-contrast brain CT scans: a systematic review and meta-analysis. J Clin Med 2025; 14(7): 2377. DOI: 10.3390/jcm14072377.
- 14. Voter AF, Meram E, Garrett JW, et al. Diagnostic accuracy and failure mode analysis of a deep learning algorithm for the detection of intracranial hemorrhage. J Am Coll Radiol 2021; 18(8): 1143–1152. DOI: 10.1016/j.jacr.2021.03.005.
- 15. Seyam M, Weikert T, Sauter A, et al. Utilization of artificial intelligence-based intracranial hemorrhage detection on emergent noncontrast CT images in clinical workflow. Radiol Artif Intell 2022; 4(2): e210168. DOI: 10.1148/ryai.210168.
- 16. Fletcher AC, Bigwood S, Ratnakanthan P, et al. Retrospective analysis and prospective validation of an AI-based software for intracranial haemorrhage detection at a high-volume trauma centre. Sci Rep 2022; 12(1): 19885. DOI: 10.1038/s41598-022-24504-y.
- 17. Ojeda P, Zawaideh M, Mossa-Basha M, et al. The utility of deep learning: evaluation of a convolutional neural network for detection of intracranial bleeds on non-contrast head computed tomography studies. In: Angelini ED, Landman BA. (eds). Medical Imaging 2019: Image Processing. San Diego, United States: SPIE, 2019, p. 128. DOI: 10.1117/12.2513167.
- 18. Tanwar CHM, Elkassem AA, Sturdivant A, et al. Prospective evaluation of artificial intelligence triage of intracranial hemorrhage on noncontrast head CT examinations. Am J Roentgenol 2024; 223(5): e2431639. DOI: 10.2214/AJR.24.31639.
- 19. Chodakiewitz Y. Prescreening for intracranial hemorrhage on CT head scans with an AI-based radiology workflow triage tool: an accuracy study. J Med Diagn Methods 2019; 8(2). Available: https://www.longdom.org/open-access/prescreening-for-intracranial-hemorrhage-on-ct-head-scans-with-an-aibased-radiology-workflow-triage-tool-an-accuracy-study-44351.html
- 20. Chodakiewitz Y, Maya M, Pressman B. Abstract WP63: AI-augmented review of CT brain exams to determine rate of missed diagnoses of intracranial hemorrhage by practicing neuroradiologists. Stroke 2020; 51(Suppl_1). DOI: 10.1161/str.51.suppl_1.WP63.
- 21. Rao B, Zohrabian V, Cedeno P, et al. Utility of artificial intelligence tool as a prospective radiology peer reviewer - detection of unreported intracranial hemorrhage. Acad Radiol 2021; 28(1): 85–93. DOI: 10.1016/j.acra.2020.01.035.
- 22. Xi TJY, Stehel E, Browning T, et al. Active reprioritization of the reading worklist using artificial intelligence has a beneficial effect on the turnaround time for interpretation of head CT with intracranial hemorrhage. Radiol Artif Intell 2021; 3(2): e200024. DOI: 10.1148/ryai.2020200024.
- 23. Hillal A, Ullberg T, Ramgren B, et al. Computed tomography in acute intracerebral hemorrhage: neuroimaging predictors of hematoma expansion and outcome. Insights Imaging 2022; 13(1): 180. DOI: 10.1186/s13244-022-01309-1.
- 24. Panchal HN, Shah MS, Shah DS. Intracerebral hemorrhage score and volume as an independent predictor of mortality in primary intracerebral hemorrhage patients. Indian J Surg 2015; 77(S2): 302–304. DOI: 10.1007/s12262-012-0803-2.
- 25. Fornwalt MRBK, Mongelluzzo GJ, Suever JD, et al. Advanced machine learning in action: identification of intracranial hemorrhage on computed tomography scans of the head with clinical workflow integration. npj Digit Med 2018; 1(1): 9. DOI: 10.1038/s41746-017-0015-z.
- 26. Broderick SMJ, Hennerici M, Brun NC, et al. Hematoma growth is a determinant of mortality and poor outcome after intracerebral hemorrhage. Neurology 2006; 66(8): 1175–1181. DOI: 10.1212/01.wnl.0000208408.98482.99.
- 27. Kazui S, Naritomi H, Yamamoto H, et al. Enlargement of spontaneous intracerebral hemorrhage. Stroke 1996; 27(10): 1783–1787. DOI: 10.1161/01.STR.27.10.1783.
- 28. Qureshi A, Palesch Y, Investigators ATACH II. Expansion of recruitment time window in antihypertensive treatment of acute cerebral hemorrhage (ATACH) II trial. J Vasc Interv Neurol 2012; 5(supp): 6–9.
- 29. Delshad S, Dontaraju VS, Chengat V. Artificial intelligence-based application provides accurate medical triage advice when compared to consensus decisions of healthcare providers. Cureus 2021. DOI: 10.7759/cureus.16956.
- 30. Ha SNT, Bulsara MK, Doust J, et al. Increasing use of CT requested by emergency department physicians in tertiary hospitals in Western Australia 2003-2015: an analysis of linked administrative data. BMJ Open 2021; 11(3): e043315. DOI: 10.1136/bmjopen-2020-043315.
- 31. Kwee RJMRM, Kwee RM. Workload for radiologists during on-call hours: dramatic increase in the past 15 years. Insights Imaging 2020; 11(1): 121. DOI: 10.1186/s13244-020-00925-z.
- 32. Huang THK, Huang HK. Effect of a computer-aided diagnosis system on clinicians' performance in detection of small acute intracranial hemorrhage on computed tomography. Acad Radiol 2008; 15(3): 290–299. DOI: 10.1016/j.acra.2007.09.022.
- 33. Lal NR, Murray UM, Eldevik OP, et al. Clinical consequences of misinterpretations of neuroradiologic CT scans by on-call radiology residents. AJNR Am J Neuroradiol 2000; 21(1): 124–129.
- 34. Wysoki MG, Nassar CJ, Koenigsberg RA, et al. Head trauma: CT scan interpretation by radiology residents versus staff radiologists. Radiology 1998; 208(1): 125–128. DOI: 10.1148/radiology.208.1.9646802.
- 35. Geijer HM, Geijer M. Added value of double reading in diagnostic radiology, a systematic review. Insights Imaging 2018; 9(3): 287–301. DOI: 10.1007/s13244-018-0599-0.
- 36. Ghosh SR, Tanamala S, Biviji M, et al. Deep learning algorithms for detection of critical findings in head CT scans: a retrospective study. Lancet 2018; 392(10162): 2388–2396. DOI: 10.1016/S0140-6736(18)31645-3.
- 37. Sander ZZMS, Rai B, Titahong CN, et al. Using artificial intelligence to read chest radiographs for tuberculosis detection: a multi-site evaluation of the diagnostic accuracy of three deep learning systems. Sci Rep 2019; 9(1): 15000. DOI: 10.1038/s41598-019-51503-3.
- 38. Rodriguez-Ruiz A, Lång K, Gubern-Merida A, et al. Stand-alone artificial intelligence for breast cancer detection in mammography: comparison with 101 radiologists. JNCI J Natl Cancer Inst 2019; 111(9): 916–922. DOI: 10.1093/jnci/djy222.
- 39. Buist TDSM, Lee CI, Nikulin Y, et al. Evaluation of combined artificial intelligence and radiologist assessment to interpret screening mammograms. JAMA Netw Open 2020; 3(3): e200265. DOI: 10.1001/jamanetworkopen.2020.0265.