Abstract
The coronavirus disease caused by SARS-CoV-2 is a pandemic with millions of confirmed cases around the world and a high death toll. Currently, reverse transcription polymerase chain reaction (RT-PCR) is the standard diagnostic method for confirming COVID-19 infection. Various failures in the detection of the disease from laboratory samples have raised doubts about the characterisation of the infection and its spread to contacts.
In clinical practice, chest X-ray (CXR) and chest computed tomography (CT) are extremely helpful and have been widely used in the detection and diagnosis of COVID-19. CXR is the most common and widely available diagnostic imaging technique; however, its reading by less qualified and often overloaded personnel leads to a high number of errors. Chest CT can be used for triage, diagnosis, and assessment of severity, progression and response to treatment. Artificial intelligence (AI) algorithms have shown promise in image classification and can reduce diagnostic errors by at least matching the diagnostic performance of radiologists.
This review shows how AI applied to thoracic imaging speeds up and improves diagnosis and optimises the workflow of radiologists. It can provide an objective evaluation, reducing subjectivity and variability. AI can also help to optimise resources and increase efficiency in the management of COVID-19 infection.
Keywords: Chest X-ray, Computed tomography, Covid-19, Artificial intelligence, Deep learning
Introduction
Coronavirus disease (COVID-19), which is caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), is a pandemic with millions of confirmed cases and hundreds of thousands of deaths worldwide.1, 2 This unprecedented global health crisis and the major challenges involved in controlling the virus prompted the World Health Organization to declare it a pandemic on 11 March 2020.2
Viral detection using reverse transcription polymerase chain reaction (RT-PCR) is currently the standard method for the diagnosis of COVID-19 infection,3, 4 but in the current emergency, the low sensitivity of the test (60–70%), asymptomatic infection (17.9–33.3%), insufficient availability of sample collection kits and laboratory testing supplies, and equipment overload have made it impossible to test all suspected patients. This has jeopardised the diagnosis of COVID-19 infection and hampered the tracing of contacts of unconfirmed cases, given the highly contagious nature of the virus.5, 6, 7, 8
In clinical practice, easily accessible imaging tests such as chest X-ray (CXR) and computed tomography (CT) are extremely helpful and have been widely used for the detection and diagnosis of COVID-19. In China, numerous cases were flagged as suspicious for COVID-19 based on CT findings.9, 10, 11, 12, 13, 14 Although CT is the more sensitive imaging technique (showing abnormalities in 96% of COVID-19 patients) and can yield findings even before infection becomes detectable by RT-PCR, CXR is more accessible, does not require patient transfer, involves a lower radiation dose and has a considerably lower cost.15, 16
Given the limited number of available radiologists and the high volume of imaging procedures, accurate and fast artificial intelligence (AI) models can help provide patients with timely medical care.17 AI, and in particular deep learning (DL), an emerging technology in the field of medical imaging, could actively contribute to combating COVID-19.18
The objective of this article is to review the literature for the main contributions of AI to conventional chest imaging tests used in COVID-19 published up until May 2020.
AI and COVID-19 imaging tests
The term AI is used to describe various technologies which perform processes that mimic human intelligence and cognitive functions. DL is characterised by the use of deep neural networks (many layers, each with many neurons) trained on large amounts of data. Algorithms based on convolutional neural networks (CNNs, a machine learning and processing paradigm modelled on the visual system of animals) have shown promise in classifying and segmenting images of different entities, as well as in recognising objects within those images. Familiar examples include facial recognition for security applications. Unlike conventional programming, these networks do not require an explicitly defined solution to the problem. In imaging-based medical diagnostics, for example, there is no explicitly known solution, only numerous images which have been labelled with a specific diagnosis by human experts. The algorithms are trained using a set of examples for which the output (outcome or label) is already known. Image recognition systems learn from these outcomes and adjust their parameters to fit the input data. Once the model has been properly trained and its internal parameters are consistent, appropriate predictions can be made for new, previously unseen data for which the outcome is not yet known. These DL algorithms are capable of autonomous feature learning.19, 20, 21, 22
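As a minimal illustration of this training process (a sketch only, not the pipeline of any study cited in this review), the following Python/PyTorch fragment fits a small CNN to chest images that have already been labelled by experts; the folder layout ("train/covid", "train/pneumonia", "train/normal"), image size and hyperparameters are assumptions made purely for the example.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Grayscale(),          # chest radiographs are single-channel
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Hypothetical folder layout: train/covid, train/pneumonia, train/normal
train_set = datasets.ImageFolder("train", transform=transform)
loader = DataLoader(train_set, batch_size=16, shuffle=True)

class SmallCNN(nn.Module):
    """Toy convolutional classifier: stacked conv layers learn image features autonomously."""
    def __init__(self, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 56 * 56, n_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = SmallCNN()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):               # training: parameters are adjusted to fit the labelled examples
    for images, labels in loader:
        optimiser.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimiser.step()
```

Once trained, the same model is simply applied to new, unlabelled images to produce a predicted diagnosis.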
Experience with AI-supported image analysis has shown that the combination of human readers and AI reduces diagnostic errors more effectively than either alone.23
There are several ways to perform segmentation, i.e. separating the lesion from the adjacent tissue to "cut out" the region of interest (ROI); the processes carried out by these algorithms can be:
1. Manual (by the user): prone to intra- and inter-individual variability, and very laborious and time-consuming.
2. Semi-automatic: the volume is extracted automatically and the user edits the contours manually.
3. Automatic: fast and precise, performed by DL. Training a robust image segmentation network requires a sufficiently large volume of labelled data, which is not always readily available.
This is an essential step in image processing and analysis for the assessment and quantification of COVID-19, and the segmented regions can be used to extract characteristics useful for diagnosis or other applications. In COVID-19, complete data are often not available and human knowledge may be required in the training loop, which involves interaction with radiologists.24
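By way of illustration (a hedged sketch, not a published method), once a lesion mask has been obtained by any of the three approaches above, simple quantitative features can be read off the segmented ROI; the function below assumes the CT volume and the lung and lesion masks are already available as NumPy arrays, and the feature names are illustrative.

```python
import numpy as np

def roi_features(ct_volume: np.ndarray, lung_mask: np.ndarray, lesion_mask: np.ndarray) -> dict:
    """Extract simple quantitative features from a segmented CT volume.

    ct_volume   : 3D array of attenuation values (HU)
    lung_mask   : boolean mask of the lungs
    lesion_mask : boolean mask of the segmented lesions (the ROI)
    """
    lesion_voxels = ct_volume[lesion_mask]
    return {
        "lesion_volume_voxels": int(lesion_mask.sum()),
        "lung_volume_voxels": int(lung_mask.sum()),
        # proportion of lung involved, analogous to the percentage-of-infection measures discussed later
        "infection_fraction": float(lesion_mask.sum()) / max(int(lung_mask.sum()), 1),
        "mean_lesion_attenuation_hu": float(lesion_voxels.mean()) if lesion_voxels.size else 0.0,
    }

# Example with synthetic data
ct = np.random.normal(-700, 100, size=(64, 128, 128))
lungs = np.zeros_like(ct, dtype=bool); lungs[:, 20:108, 20:108] = True
lesions = np.zeros_like(ct, dtype=bool); lesions[20:30, 40:60, 40:60] = True
print(roi_features(ct, lungs, lesions))
```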
CXR-based detection of COVID-19
CXR is the most common and widespread diagnostic imaging method; however, reading of X-ray images by less experienced and often overloaded staff can lead to high error rates. During the pandemic, in the outpatient setting, 20% of CXR initially read as 'normal' were reportedly reclassified as 'abnormal' on a second reading.25
Despite being the first-line imaging method used to investigate cases of COVID-19, CXR is less sensitive than CT, since images may appear normal in early or mild disease. In this regard, 69% of patients have an abnormal CXR at initial presentation, vs. 80% at some point after admission.26 The use of AI helps to increase diagnostic sensitivity, as shown in Table 1.
Table 1. Studies on AI-based detection of COVID-19 on CXR (A: accuracy; S: sensitivity; SP: specificity; PPV: positive predictive value; AUC: area under the ROC curve).

Author | Method | Subjects | Task classification | Results
---|---|---|---|---
Hemdan32 | COVIDX-Net | 25 covid, 25 normal | Covid/normal | A: 90%
Wang and Wong14 | COVID-Net | 266 covid, 5538 non-covid pneumonia, 8066 normal | Covid/pneumonia/normal | A: 93.3%; S: 91%; PPV: 98.9%
Apostolopoulos13 | VGG-19 | 224 covid, 700 bacterial pneumonia, 504 normal | Covid/other; Covid/pneumonia/normal | A: 98.75% (binary); A: 93.48% (three classes)
Narin12 | ResNet50, Inception V3, Inception-ResNet V2 | 50 covid, 50 normal | Covid/normal | A: 98%, 97% and 87%, respectively
Sethy and Behera33 | ResNet50 + SVM | 25 covid, 25 pneumonia | Covid/pneumonia | A: 95.4%
Ozturk28 | DarkCovidNet | 125 covid, 500 pneumonia, 500 normal | Covid/pneumonia/normal | A: 98.08% (binary); A: 87.03% (three classes)
Ghoshal35 | Bayesian CNN | 68 covid, 2786 bacterial pneumonia, 1504 viral pneumonia, 1583 normal | Covid/other | A: 92.9%
Zhang27 | ResNet18 | 70 covid, 1008 pneumonia | Covid/pneumonia | S: 96.0%; SP: 70.7%; AUC: 0.95
Murphy34 | CAD4COVID-XRay | 223 covid, 231 non-covid infection/normal | Covid/other | AUC: 0.81
CXR may play an important role in triage for COVID-19, particularly in low-resource settings. Zhang27 presented a model with a diagnostic performance similar to that of CT-based methods. CXR can therefore be considered an effective tool for rapid and low-cost detection of COVID-19, although some limitations, such as the high number of false-positive results, still need to be addressed. Ozturk28 was able to diagnose COVID-19 in seconds using raw CXR images, with heat maps reviewed by an expert radiologist to localise the affected regions.
Most current studies use CXR images from small databases, raising questions about the robustness of the methods and their generalisability to other medical facilities. In this regard, Hurt29 showed that applying the same algorithm to CXR images obtained in different hospitals yielded a surprising degree of generalisation and robustness of the DL approach. Although these results are not exhaustive proof of cross-hospital performance, they suggest that cross-institutional generalisation is feasible, in contrast to what is generally perceived in the field,30 despite considerable variation in X-ray technique between the outpatient and inpatient settings.
In addition, most AI systems developed to date are closed-source and not available for public use. Wang14 used COVID-Net, an open-source model developed and tailored for the detection of COVID-19 cases from CXR images, applied to COVIDx, an open-access, continuously updated benchmark dataset with the largest number of publicly available COVID-19-positive cases. The prototype was built to make one of three predictions (non-COVID-19 infection, COVID-19 viral infection or no infection/normal) and thereby help to prioritise PCR testing and decide which treatment strategy to employ. COVID-Net had a lower computational complexity and higher sensitivity than other algorithms, with a screening accuracy of 92.4%. Using the same public database of fewer than 100 COVID-19 images, Minaee31 trained four neural networks with deep transfer learning to identify COVID-19 on CXR images, most of them achieving a sensitivity of 97% (±5%) and a specificity of around 90%. Other studies describe classification rates of up to 98%.12, 13, 32, 33
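A minimal sketch of the deep-transfer-learning idea referenced above (an illustration under assumed settings, not the actual code of Minaee et al.31): a network pretrained on natural images is reused, and only a new final layer is fitted to the small set of COVID-19 radiographs. The two-class head and the freezing strategy are assumptions made for the example.

```python
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained backbone and freeze its feature extractor,
# since only a small number of COVID-19 radiographs are available for fine-tuning.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer with a new, trainable head (here: COVID-19 vs. non-COVID).
model.fc = nn.Linear(model.fc.in_features, 2)
# Only model.fc is then trained, with an ordinary cross-entropy loop as in the earlier sketch.
```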
The aim of using AI is to at least match the diagnostic performance of radiologists. Murphy34 thus demonstrated that the performance of an AI system for detecting COVID-19 pneumonia was comparable to that of six independent radiologists, with an operating point of 85% sensitivity and 61% specificity, using RT-PCR as the reference standard for the presence or absence of SARS-CoV-2 infection. The AI system correctly classified CXR images as COVID-19 pneumonia with an AUC of 0.81, significantly outperforming each reader (p < 0.001) at their highest possible sensitivities.
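For reference, the metrics quoted above (the AUC of a continuous AI output and the sensitivity/specificity at a chosen operating threshold, against the RT-PCR reference) can be computed as follows; the scores below are synthetic and purely illustrative, not the study data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 200)                                    # 1 = RT-PCR positive
scores = np.clip(y_true * 0.4 + rng.normal(0.4, 0.25, 200), 0, 1)   # AI output in [0, 1]

auc = roc_auc_score(y_true, scores)
y_pred = (scores >= 0.5).astype(int)                                # operating point = threshold 0.5
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity, specificity = tp / (tp + fn), tn / (tn + fp)
print(f"AUC={auc:.2f}  sensitivity={sensitivity:.2f}  specificity={specificity:.2f}")
```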
Many DL methods focus exclusively on improving classification or prediction accuracy without quantifying the uncertainty of the decision. Knowing the degree of confidence in a diagnosis is essential for doctors to gain trust in the technology. Ghoshal35 proposed a Bayesian convolutional neural network (BCNN) which allows stronger conclusions to be drawn from the data, combining what is already known about the response to estimate the diagnostic uncertainty of COVID-19 predictions. Bayesian inference improved the detection accuracy of the standard model from 85.7% to 92.9%. The authors also generated relevance maps to illustrate the relevant image locations and improve the understanding of the results, facilitating a more informed decision-making process.
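A common practical approximation to this Bayesian approach is Monte Carlo dropout; the hedged sketch below (an illustration, not Ghoshal and Tucker's implementation) keeps dropout active at inference time and reports the spread of repeated stochastic predictions as the uncertainty of the diagnosis. The tiny architecture is purely illustrative.

```python
import torch
import torch.nn as nn

# Any dropout-containing classifier would do; this toy network stands in for a trained model.
model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.Dropout2d(0.3),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 3),
)

def predict_with_uncertainty(model, image, n_samples=30):
    model.train()                       # keep dropout layers stochastic at inference
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(image), dim=1) for _ in range(n_samples)])
    mean = probs.mean(dim=0)            # averaged prediction
    std = probs.std(dim=0)              # per-class spread = diagnostic uncertainty
    return mean, std

x = torch.randn(1, 1, 224, 224)         # dummy chest image
mean, std = predict_with_uncertainty(model, x)
print(mean, std)
```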
Chest CT-based detection of COVID-19
Chest CT can be used to answer various questions in the hospital setting: triage patients, aid diagnosis, and assess disease severity, progression, and response to treatment based on quantification. Table 2 shows studies in which CT scans were used for the diagnosis of COVID-19.
Table 2. Studies on AI-based detection of COVID-19 on chest CT (A: accuracy; S: sensitivity; SP: specificity; AUC: area under the ROC curve; CAP: community-acquired pneumonia).

Author | Method | Subjects | Task classification | Results
---|---|---|---|---
Jin36 | U-Net++ | 723 covid, 413 other | Covid/other | S: 97.4%; SP: 92.2%
Jin37 | CNN | 496 covid, 1385 other | Covid/other | S: 94.1%; SP: 95.5%
Li42 | COVNet (ResNet50) | 468 covid, 1551 CAP, 1303 other | Covid/pneumonia/other | S: 90.0%; SP: 96.0%; AUC: 0.96
Chen38 | U-Net++ | 50 covid, 55 other | Covid/other | S: 100%; SP: 93.6%; A: 95.2%
Wang41 | Modified Inception | 79 covid, 180 viral pneumonia; 15 covid (initial PCR negative) | Covid/pneumonia; prediction (PCR−) | A: 82.5%; S: 75%; SP: 86%; A: 85.2% (prediction)
Xu44 | ResNet18 | 219 covid, 224 influenza A, 175 normal | Covid/influenza A/normal | A: 86.7%
Zheng39 | U-Net + 3D deep network | 313 covid, 229 other | Covid/other | S: 90.7%; SP: 91.1%; A: 90.8%; AUC: 0.959
Shi47 | V-Net + random forest | 1658 covid, 1027 pneumonia | Covid/pneumonia | S: 90.7%; SP: 83.3%; AUC: 87.9%
Song43 | DRE-Net (ResNet50) | 88 covid, 101 pneumonia, 86 normal | Covid/pneumonia/normal | A: 86%; AUC: 0.95
Bai40 | EfficientNet B4 | 521 covid, 665 pneumonia | Covid/pneumonia | A: 96%; AUC: 0.95
Diagnosis
To achieve early and rapid discriminatory diagnosis, i.e. to confirm or rule out disease, various systems use segmentation as a pre-classification step with high accuracy, even shortening radiologists' reading times by 65%.36, 37, 38, 39
COVID-19 and non-COVID-19 pneumonia share similar characteristics on CT, making accurate differentiation challenging. Bai's model40 achieved greater test accuracy (96% vs. 85%, p < 0.001), sensitivity (95% vs. 79%, p < 0.001) and specificity (96% vs. 88%, p = 0.002) than the radiologists. In addition, with the assistance of AI the radiologists achieved a higher average test accuracy (90% vs. 85%, p < 0.001), sensitivity (88% vs. 79%, p < 0.001) and specificity (91% vs. 88%, p = 0.001). Comparing the performance of their model with that of two expert radiologists, Wang41 found the former to be considerably more accurate and sensitive; each case takes about 10 s to screen, and screening can be done remotely via a shared public platform. The method differentiated COVID-19 from viral pneumonia with an accuracy of 82.5%, vs. 55.8% and 55.4% for the two radiologists, and COVID-19 was correctly predicted with an accuracy of 85.2% on CT images from patients with negative initial microbiological tests. To distinguish COVID-19 from community-acquired pneumonia (CAP), Li42 included CAP and other non-pneumonia CT examinations to test the robustness of the model, achieving a sensitivity of 90% and a specificity of 96%, with an AUC of 0.96 (p < 0.001). A deep learning method used by Song43 achieved an accuracy of 86.0% for the differential classification of the type of pneumonia and of 94% for differentiating pneumonia from its absence. Xu44 achieved an overall accuracy of 86.7% in distinguishing COVID-19 pneumonia from influenza A viral pneumonia and healthy cases using deep learning techniques.
Singh45 tuned the initial parameters of a CNN using multi-objective differential evolution to improve workflow efficiency and save healthcare professionals' time. Li et al.,46 using a system installed in their hospital, improved detection efficiency by alerting technicians within 2 min of the CT examination, and noted that automatic lesion segmentation on CT was also helpful for the quantitative evaluation of COVID-19 progression.
Shi47 used a different approach, segmenting CT images and calculating various quantitative characteristics manually to train the model based on infection size and proportions. This method yielded a sensitivity of 90.7%, although the detection rate was low for patients with a small infection size.
Quantification
Accurate assessment of the extent of the disease is a challenge. In addition, follow-up at intervals of 3–5 days is often recommended to assess disease progression. In the absence of computerised quantification tools, radiological reports rely on qualitative assessments and a rough description of the infected areas. Manual calculation of scores poses a dual challenge: the affected regions are either delineated accurately, which is time-consuming, or assessed subjectively, which results in low reproducibility. Respiratory severity scales recently proposed for COVID-19 show a correlation between disease progression and severity.10, 48, 49, 50, 51, 52
Accurate and automated score measurements would address speed and reproducibility issues, as shown in Table 3.
Table 3. Studies on AI-based quantification of COVID-19 on chest CT (A: accuracy; S: sensitivity; SP: specificity; AUC: area under the ROC curve; DSC: Dice similarity coefficient; PPV/NPV: positive/negative predictive value; LR: logistic regression; RF: random forest).

Author | Method | Subjects | Task classification | Results
---|---|---|---|---
Gozes55 | U-Net | 56 covid, 51 normal | Covid/non-covid; progression | S: 98.2%; SP: 92.2%; AUC: 0.996
Shan24 | VB-Net | 549 covid | Progression | DSC: 0.916
Huang53 | U-Net | 126 covid | Follow-up | DSC: 0.848
Qi54 | U-Net (LR/RF) | 31 covid | Duration of hospital stay | AUC (LR): 0.970; AUC (RF): 0.920
Chen57 | ResNet-X | 60 covid | Progression | A: 95%
Barstugan56 | GLCM, LDP, GLRLM, GLSZM and DWT features + SVM | 53 covid | Recognition of sites of infection | A: 99.8%
Tang59 | VB-Net/RF | 176 covid | Severity | A: 87.5%; PPV: 93.3%; NPV: 74.5%
Colombi60 | 3D Slicer software | 236 covid | Ventilation (ICU)/death | AUC: 0.86
Chaganti63 | Dense U-Net | 100 covid | Severity | Pearson r: 0.90–0.92
Using an automated DL-based quantification tool, Huang53 assessed changes in the percentage of lung opacification, comparing baseline and follow-up CT images to monitor progression. In patients ranging from mild to critical, significantly different percentages of opacification were found between clinical groups at baseline, with generally significant increases in opacification between baseline and follow-up CT. The results were reviewed by two radiologists, and the good agreement with the system and between readers (kappa coefficient 0.75) could potentially eliminate subjectivity in the assessment of pulmonary findings in COVID-19.
Qi54 developed models to predict the length of hospital stay in patients with COVID-19 pneumonia, comparing short (≤10 days) and long stays (>10 days). These models predicted the length of stay with an AUC of 0.92–0.97, a sensitivity of 0.89–1.0 and a specificity of 0.75–1.0, depending on the method used.
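The kind of short- versus long-stay classifier described above can be sketched with standard tools; the feature matrix below is synthetic and purely illustrative (Qi et al.54 derived their features from U-Net-segmented lesions), so the numbers produced have no clinical meaning.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
y = np.repeat([0, 1], [16, 15])                    # 1 = hospital stay > 10 days
X = rng.normal(size=(31, 6)) + y[:, None] * 0.8    # 6 CT-derived features (synthetic stand-ins)

for name, clf in [("Logistic regression", LogisticRegression(max_iter=1000)),
                  ("Random forest", RandomForestClassifier(n_estimators=100, random_state=0))]:
    auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: cross-validated AUC = {auc:.2f}")
```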
Other quantification systems, in addition to providing excellent accuracy, improve the ability to distinguish between the various manifestations of COVID-19, can analyse large numbers of CT scans and measure progression throughout follow-up.55, 56, 57, 58 It is known that manual assessment of severity can delay treatment planning, and that visual quantification of disease extent on CT correlates with clinical severity.52 Tang59 thus proposed a model to assess the severity of COVID-19 (non-severe or severe). A total of 63 quantitative features were used to calculate the volume of ground-glass opacity (GGO) regions and their ratio to total lung volume, a ratio found to correlate closely with severity. Using a similar principle, Colombi60 sought to quantify well-aerated lung on baseline chest CT, looking for predictors of ICU admission or death. Quantitative analysis of well-aerated lung performed visually (%V-WAL) and with open-source software (%S-WAL) showed that involvement of four or more lobes was more frequent in patients who were admitted to the ICU or died than in those who were not (16% vs. 6% of patients, p < 0.04). After adjustment for patient demographics and clinical parameters, well-aerated lung parenchyma <73% on baseline chest CT was associated with the greatest probability of ICU admission or death (OR 5.4, p < 0.001). Software-based quantification showed similar results, supporting the robustness of the findings. Although visual assessment of well-aerated lung showed good interobserver agreement (ICC 0.85) in a research setting, automated software measurement of WAL could, in theory, offer greater reliability in clinical practice. Quantification of well-aerated lung is known to be useful for estimating alveolar recruitment during ventilation and for predicting the prognosis of patients with acute respiratory distress syndrome (ARDS); well-aerated lung regions can also act as a surrogate for functional residual capacity.61, 62
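As an illustration of these volumetric ratios (a sketch under assumed attenuation thresholds, not the cut-offs or pipelines used by Tang et al.59 or Colombi et al.60), the percentage of well-aerated lung and the fraction of diseased lung can be derived from a CT volume and a lung mask as follows.

```python
import numpy as np

def lung_ratios(ct_hu: np.ndarray, lung_mask: np.ndarray,
                well_aerated=(-950, -700), diseased_upper=-700):
    """Return the percentage of well-aerated lung and a crude diseased-lung fraction.

    The HU ranges are illustrative assumptions, not published thresholds.
    """
    lung_hu = ct_hu[lung_mask]
    wal = np.logical_and(lung_hu >= well_aerated[0], lung_hu < well_aerated[1]).mean()
    diseased = (lung_hu >= diseased_upper).mean()   # GGO + consolidation proxy
    return {"percent_well_aerated": 100 * wal, "percent_diseased": 100 * diseased}

ct = np.random.normal(-800, 150, size=(64, 128, 128))      # synthetic CT volume in HU
lungs = np.zeros_like(ct, dtype=bool); lungs[:, 16:112, 16:112] = True
print(lung_ratios(ct, lungs))
```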
Shan24 automatically quantified regions of interest (ROIs) and their volumetric ratios with respect to the lung, providing quantitative assessments of disease progression by visualising the distribution of lesions and predicting severity based on the percentage of infection (POI). The performance of the system was evaluated by comparing automatically segmented infection regions with manually delineated ones, showing a dramatic reduction in total segmentation time from 4–5 h to 4 min, excellent agreement (Dice similarity coefficient (DSC) = 2TP/(2TP + FP + FN) = 0.916) and a mean POI estimation error of 0.3% for the whole lung. Chaganti63 quantified abnormal areas by generating two measures of severity: the overall proportion of disease relative to lung volume and the degree of involvement of each lung lobe. Since high opacities such as consolidation correlated with more severe symptoms, the measurements quantified both the extent of disease and the presence of consolidation, providing valuable information for risk prioritisation and for predicting prognosis and response to treatment. Pearson's correlation coefficient for the method's predictions was 0.90–0.92 and there were virtually no false positives. In addition, automated processing took 10 s per case compared with the 30 min required for manual annotation. Other papers have focused on response to treatment based on imaging findings over time, monitoring of changes during follow-up and the possibility of recovery from disease.
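The agreement measure quoted above can be computed directly from the two masks; the snippet below implements DSC = 2TP/(2TP + FP + FN) and applies it to synthetic automatic and manual segmentations for illustration.

```python
import numpy as np

def dice(auto_mask: np.ndarray, manual_mask: np.ndarray) -> float:
    """Dice similarity coefficient between an automatic and a manual binary segmentation."""
    auto, manual = auto_mask.astype(bool), manual_mask.astype(bool)
    tp = np.logical_and(auto, manual).sum()      # voxels marked by both
    fp = np.logical_and(auto, ~manual).sum()     # marked only automatically
    fn = np.logical_and(~auto, manual).sum()     # marked only manually
    return 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 1.0

manual = np.zeros((128, 128), dtype=bool); manual[30:90, 30:90] = True
auto = np.zeros((128, 128), dtype=bool);   auto[35:95, 32:92] = True
print(f"DSC = {dice(auto, manual):.3f}")
```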
Conclusions
AI applied to the interpretation of radiological images makes it possible to streamline and improve diagnosis while optimising the workflow of radiologists. Despite its lower sensitivity compared with CT, efforts to improve the diagnostic yield of CXR are of the utmost interest, since it is the most common and widely used imaging method. AI makes it possible to monitor disease progression, provides an objective assessment based on quantitative information, reduces subjectivity and variability, and allows resources to be optimised thanks to its potential ability to predict the length of hospital stay. Used to support clinical practice, in conjunction with other diagnostic techniques, it could help increase efficiency in the management of COVID-19 infection.
Conflict of interests
The authors declare that they have no conflict of interest.
References
- 1. Bai Y., Yao L., Wei T., Tian F., Jin D.-Y., Chen L., et al. Presumed asymptomatic carrier transmission of COVID-19. JAMA. 2020;323:1406. doi: 10.1001/jama.2020.2565.
- 2. WHO Director-General's opening remarks at the media briefing on COVID-19 - 11 March 2020. Geneva: World Health Organization; 11 March 2020. Available from: https://www.who.int/dg/speeches/detail/who-director-general-s-opening-remarks-at-the-media-briefing-on-covid-19---11-march-2020.
- 3. Tang Y.-W., Schmitz J.E., Persing D.H., Stratton C.W. The laboratory diagnosis of COVID-19 infection: current issues and challenges. J Clin Microbiol. 2020. doi: 10.1128/JCM.00512-20.
- 4. Ai T., Yang Z., Hou H., Zhan C., Chen C., Lv W., et al. Correlation of chest CT and RT-PCR testing in coronavirus disease 2019 (COVID-19) in China: a report of 1014 cases. Radiology. 2020:200642. doi: 10.1148/radiol.2020200642.
- 5. Winichakoon P., Chaiwarith R., Liwsrisakun C., Salee P., Goonna A., Limsukon A., et al. Negative nasopharyngeal and oropharyngeal swabs do not rule out COVID-19. J Clin Microbiol. 2020;58:e00297-20. doi: 10.1128/JCM.00297-20.
- 6. Li R., Pei S., Chen B., Song Y., Zhang T., Yang W., et al. Substantial undocumented infection facilitates the rapid dissemination of novel coronavirus (SARS-CoV2). Science. 2020:eabb3221. doi: 10.1126/science.abb3221.
- 7. Mizumoto K., Kagaya K., Zarebski A., Chowell G. Estimating the asymptomatic proportion of coronavirus disease 2019 (COVID-19) cases on board the Diamond Princess Cruise ship, Yokohama, Japan, 2020. Euro Surveill. 2020;25. doi: 10.2807/1560-7917.ES.2020.25.10.2000180.
- 8. Nishiura H., Kobayashi T., Suzuki A., Jung S.-M., Hayashi K., Kinoshita R., et al. Estimation of the asymptomatic ratio of novel coronavirus infections (COVID-19). Int J Infect Dis. 2020. doi: 10.1016/j.ijid.2020.03.020.
- 9. Kanne J.P. Chest CT findings in 2019 novel coronavirus (2019-nCoV) infections from Wuhan, China: key points for the radiologist. Radiology. 2020;295:16–17. doi: 10.1148/radiol.2020200241.
- 10. Bernheim A., Mei X., Huang M., Yang Y., Fayad Z.A., Zhang N., et al. Chest CT findings in coronavirus disease-19 (COVID-19): relationship to duration of infection. Radiology. 2020;295:200463. doi: 10.1148/radiol.2020200463.
- 11. Xie X., Zhong Z., Zhao W., Zheng C., Wang F., Liu J. Chest CT for typical 2019-nCoV pneumonia: relationship to negative RT-PCR testing. Radiology. 2020:200343. doi: 10.1148/radiol.2020200343.
- 12. Narin A., Kaya C., Pamuk Z. Automatic detection of coronavirus disease (COVID-19) using X-ray images and deep convolutional neural networks. arXiv:2003.10849; 2020. Available from: http://arxiv.org/abs/2003.10849 [cited 22.05.20].
- 13. Apostolopoulos I.D., Bessiana T. Covid-19: automatic detection from X-ray images utilizing transfer learning with convolutional neural networks. Phys Eng Sci Med. 2020. doi: 10.1007/s13246-020-00865-4. Available from: http://arxiv.org/abs/2003.11617 [cited 22.05.20].
- 14. Wang L., Wong A. COVID-Net: a tailored deep convolutional neural network design for detection of COVID-19 cases from chest X-ray images. arXiv:2003.09871; 2020. doi: 10.1038/s41598-020-76550-z. Available from: http://arxiv.org/abs/2003.09871 [cited 22.05.20].
- 15. Mossa-Basha M., Meltzer C.C., Kim D.C., Tuite M.J., Kolli K.P., Tan B.S. Radiology department preparedness for COVID-19: radiology scientific expert panel. Radiology. 2020:200988. doi: 10.1148/radiol.2020200988.
- 16. Kooraki S., Hosseiny M., Myers L., Gholamrezanezhad A. Coronavirus (COVID-19) outbreak: what the department of radiology should know. J Am Coll Radiol. 2020;17:447–451. doi: 10.1016/j.jacr.2020.02.008.
- 17. Caobelli F. Artificial intelligence in medical imaging: game over for radiologists? Eur J Radiol. 2020;126:108940. doi: 10.1016/j.ejrad.2020.108940.
- 18. Bullock J., Luccioni A., Pham K.H., Lam C.S.N., Luengo-Oroz M. Mapping the landscape of artificial intelligence applications against COVID-19. arXiv:2003.11336; 2020. Available from: http://arxiv.org/abs/2003.11336 [cited 22.05.20].
- 19. LeCun Y., Bengio Y., Hinton G. Deep learning. Nature. 2015;521:436–444. doi: 10.1038/nature14539.
- 20. Krizhevsky A., Sutskever I., Hinton G.E. ImageNet classification with deep convolutional neural networks. Commun ACM. 2017;60:84–90.
- 21. Lecun Y., Bottou L., Bengio Y., Haffner P. Gradient-based learning applied to document recognition. Proc IEEE. 1998;86:2278–2324.
- 22. He K., Zhang X., Ren S., Sun J. Deep residual learning for image recognition. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR); Las Vegas, NV, USA: IEEE; 2016. pp. 770–778. Available from: http://ieeexplore.ieee.org/document/7780459/ [cited 30.06.19].
- 23. Nam J.G., Park S., Hwang E.J., Lee J.H., Jin K.-N., Lim K.Y., et al. Development and validation of deep learning-based automatic detection algorithm for malignant pulmonary nodules on chest radiographs. Radiology. 2019;290:218–228. doi: 10.1148/radiol.2018180237.
- 24. Shan F., Gao Y., Wang J., Shi W., Shi N., Han M., et al. Lung infection quantification of COVID-19 in CT images with deep learning. arXiv:2003.04655; 2020. Available from: http://arxiv.org/abs/2003.04655 [cited 19.05.20].
- 25. Weinstock M.B., Echenique A., Russell J.W., Leib A., Miller J.A., Cohen D.J., et al. Chest X-ray findings in 636 ambulatory patients with COVID-19 presenting to an urgent care center: a normal chest X-ray is no guarantee. J Urgent Care Med. 2020:13–18.
- 26. Wong H.Y.F., Lam H.Y.S., Fong A.H.-T., Leung S.T., Chin T.W.-Y., Lo C.S.Y., et al. Frequency and distribution of chest radiographic findings in COVID-19 positive patients. Radiology. 2020:201160. doi: 10.1148/radiol.2020201160.
- 27. Zhang J., Xie Y., Li Y., Shen C., Xia Y. COVID-19 screening on chest X-ray images using deep learning based anomaly detection. arXiv:2003.12338; 2020. Available from: http://arxiv.org/abs/2003.12338 [cited 26.04.20].
- 28. Ozturk T., Talo M., Yildirim E.A., Baloglu U.B., Yildirim O., Rajendra Acharya U. Automated detection of COVID-19 cases using deep neural networks with X-ray images. Comput Biol Med. 2020;121:103792. doi: 10.1016/j.compbiomed.2020.103792.
- 29. Hurt B., Kligerman S., Hsiao A. Deep learning localization of pneumonia: 2019 coronavirus (COVID-19) outbreak. J Thorac Imaging. 2020;35:W87–W89. doi: 10.1097/RTI.0000000000000512.
- 30. Zech J.R., Badgeley M.A., Liu M., Costa A.B., Titano J.J., Oermann E.K. Variable generalization performance of a deep learning model to detect pneumonia in chest radiographs: a cross-sectional study. PLoS Med. 2018;15:e1002683. doi: 10.1371/journal.pmed.1002683.
- 31. Minaee S., Kafieh R., Sonka M., Yazdani S., Soufi G.J. Deep-COVID: predicting COVID-19 from chest X-ray images using deep transfer learning. arXiv:2004.09363; 2020. Available from: http://arxiv.org/abs/2004.09363 [cited 22.05.20].
- 32. Hemdan E.E.-D., Shouman M.A., Karar M.E. COVIDX-Net: a framework of deep learning classifiers to diagnose COVID-19 in X-ray images. arXiv:2003.11055; 2020. Available from: http://arxiv.org/abs/2003.11055 [cited 22.05.20].
- 33. Sethy P.K., Behera S.K. Detection of coronavirus disease (COVID-19) based on deep features. Preprints. 2020. Available from: https://www.preprints.org/manuscript/202003.0300/v1 [cited 20.05.20].
- 34. Murphy K., Smits H., Knoops A.J.G., Korst M.B.J.M., Samson T., Scholten E.T., et al. COVID-19 on the chest radiograph: a multi-reader evaluation of an AI system. Radiology. 2020:201874. doi: 10.1148/radiol.2020201874.
- 35. Ghoshal B., Tucker A. Estimating uncertainty and interpretability in deep learning for coronavirus (COVID-19) detection. arXiv:2003.10769; 2020. Available from: http://arxiv.org/abs/2003.10769 [cited 22.05.20].
- 36. Jin S., Wang B., Xu H., Luo C., Wei L., Zhao W., et al. AI-assisted CT imaging analysis for COVID-19 screening: building and deploying a medical AI system in four weeks. medRxiv. 2020. doi: 10.1016/j.asoc.2020.106897. Available from: http://medrxiv.org/lookup/doi/10.1101/2020.03.19.20039354 [cited 18.05.20].
- 37. Jin C., Chen W., Cao Y., Xu Z., Zhang X., Deng L., et al. Development and evaluation of an AI system for COVID-19 diagnosis. medRxiv. 2020. doi: 10.1038/s41467-020-18685-1. Available from: http://medrxiv.org/lookup/doi/10.1101/2020.03.20.20039834 [cited 18.05.20].
- 38. Chen J., Wu L., Zhang J., Zhang L., Gong D., Zhao Y., et al. Deep learning-based model for detecting 2019 novel coronavirus pneumonia on high-resolution computed tomography: a prospective study. medRxiv. 2020. doi: 10.1038/s41598-020-76282-0. Available from: http://medrxiv.org/lookup/doi/10.1101/2020.02.25.20021568 [cited 18.05.20].
- 39. Zheng C., Deng X., Fu Q., Zhou Q., Feng J., Ma H., et al. Deep learning-based detection for COVID-19 from chest CT using weak label. medRxiv. 2020. Available from: http://medrxiv.org/lookup/doi/10.1101/2020.03.12.20027185 [cited 25.05.20].
- 40. Bai H.X., Wang R., Xiong Z., Hsieh B., Chang K., Halsey K., et al. AI augmentation of radiologist performance in distinguishing COVID-19 from pneumonia of other etiology on chest CT. Radiology. 2020:201491. doi: 10.1148/radiol.2020201491.
- 41. Wang S., Kang B., Ma J., Zeng X., Xiao M., Guo J., et al. A deep learning algorithm using CT images to screen for Corona Virus Disease (COVID-19). medRxiv. 2020. doi: 10.1007/s00330-021-07715-1. Available from: http://medrxiv.org/lookup/doi/10.1101/2020.02.14.20023028 [cited 18.05.20].
- 42. Li L., Qin L., Xu Z., Yin Y., Wang X., Kong B., et al. Artificial intelligence distinguishes COVID-19 from community acquired pneumonia on chest CT. Radiology. 2020:200905. doi: 10.1148/radiol.2020200905.
- 43. Song Y., Zheng S., Li L., Zhang X., Zhang X., Huang Z., et al. Deep learning enables accurate diagnosis of novel coronavirus (COVID-19) with CT images. medRxiv. 2020. doi: 10.1109/TCBB.2021.3065361. Available from: http://medrxiv.org/lookup/doi/10.1101/2020.02.23.20026930 [cited 19.05.20].
- 44. Xu X., Jiang X., Ma C., Du P., Li X., Lv S., et al. Deep learning system to screen coronavirus disease 2019 pneumonia. arXiv:2002.09334; 2020. doi: 10.1016/j.eng.2020.04.010. Available from: http://arxiv.org/abs/2002.09334 [cited 19.05.20].
- 45. Singh D., Kumar V., Vaishali, Kaur M. Classification of COVID-19 patients from chest CT images using multi-objective differential evolution-based convolutional neural networks. Eur J Clin Microbiol Infect Dis. 2020:1–11. doi: 10.1007/s10096-020-03901-z.
- 46. Li D., Wang D., Dong J., Wang N., Huang H., Xu H., et al. False-negative results of real-time reverse-transcriptase polymerase chain reaction for severe acute respiratory syndrome coronavirus 2: role of deep-learning-based CT diagnosis and insights from two cases. Korean J Radiol. 2020;21:505. doi: 10.3348/kjr.2020.0146.
- 47. Shi F., Xia L., Shan F., Wu D., Wei Y., Yuan H., et al. Large-scale screening of COVID-19 from community acquired pneumonia using infection size-aware classification. arXiv:2003.09860; 2020. Available from: http://arxiv.org/abs/2003.09860 [cited 26.04.20].
- 48. Li K., Fang Y., Li W., Pan C., Qin P., Zhong Y., et al. CT image visual quantitative evaluation and clinical classification of coronavirus disease (COVID-19). Eur Radiol. 2020. doi: 10.1007/s00330-020-06817-6. Available from: http://link.springer.com/10.1007/s00330-020-06817-6 [cited 6.06.20].
- 49. Chung M., Bernheim A., Mei X., Zhang N., Huang M., Zeng X., et al. CT imaging features of 2019 novel coronavirus (2019-nCoV). Radiology. 2020;295:202–207. doi: 10.1148/radiol.2020200230.
- 50. Pan F., Ye T., Sun P., Gui S., Liang B., Li L., et al. Time course of lung changes on chest CT during recovery from 2019 novel coronavirus (COVID-19) pneumonia. Radiology. 2020:200370. doi: 10.1148/radiol.2020200370.
- 51. Zhao W., Zhong Z., Xie X., Yu Q., Liu J. Relation between chest CT findings and clinical conditions of coronavirus disease (COVID-19) pneumonia: a multicenter study. Am J Roentgenol. 2020;214:1072–1077. doi: 10.2214/AJR.20.22976.
- 52. Yang R., Li X., Liu H., Zhen Y., Zhang X., Xiong Q., et al. Chest CT severity score: an imaging tool for assessing severe COVID-19. Radiol Cardiothorac Imaging. 2020;2:e200047. doi: 10.1148/ryct.2020200047.
- 53. Huang L., Han R., Ai T., Yu P., Kang H., Tao Q., et al. Serial quantitative chest CT assessment of COVID-19: deep-learning approach. Radiol Cardiothorac Imaging. 2020;2:e200075. doi: 10.1148/ryct.2020200075.
- 54. Qi X., Lei J., Yu Q., Xi Y., Wang Y., Ju S. CT imaging of coronavirus disease 2019 (COVID-19): from the qualitative to quantitative. Ann Transl Med. 2020;8:256. doi: 10.21037/atm.2020.02.91.
- 55. Gozes O., Frid-Adar M., Greenspan H., Browning P.D., Zhang H., Ji W., et al. Rapid AI development cycle for the coronavirus (COVID-19) pandemic: initial results for automated detection & patient monitoring using deep learning CT image analysis. arXiv:2003.05037; 2020. Available from: http://arxiv.org/abs/2003.05037 [cited April 2020].
- 56. Barstugan M., Ozkaya U., Ozturk S. Coronavirus (COVID-19) classification using CT images by machine learning methods. arXiv:2003.09424; 2020. Available from: http://arxiv.org/abs/2003.09424 [cited 19.05.20].
- 57. Chen X., Yao L., Zhang Y. Residual attention U-Net for automated multi-class segmentation of COVID-19 chest CT images. arXiv:2004.05645; 2020. Available from: http://arxiv.org/abs/2004.05645 [cited 19.05.20].
- 58. Du S., Gao S., Huang G., Li S., Chong W., Jia Z., et al. CT features and artificial intelligence quantitative analysis of recovered COVID-19 patients with negative RT-PCR and clinical symptoms. Research Square. 2020. Available from: https://www.researchsquare.com/article/rs-21021/v1 [cited 11.05.20].
- 59. Tang Z., Zhao W., Xie X., Zhong Z., Shi F., Liu J., et al. Severity assessment of coronavirus disease 2019 (COVID-19) using quantitative features from chest CT images. arXiv:2003.11988; 2020. Available from: http://arxiv.org/abs/2003.11988 [cited 19.05.20].
- 60. Colombi D., Bodini F.C., Petrini M., Maffi G., Morelli N., Milanese G., et al. Well-aerated lung on admitting chest CT to predict adverse outcome in COVID-19 pneumonia. Radiology. 2020:201433. doi: 10.1148/radiol.2020201433.
- 61. Nishiyama A., Kawata N., Yokota H., Sugiura T., Matsumura Y., Higashide T., et al. A predictive factor for patients with acute respiratory distress syndrome: CT lung volumetry of the well-aerated region as an automated method. Eur J Radiol. 2020;122:108748. doi: 10.1016/j.ejrad.2019.108748.
- 62. Gattinoni L., Caironi P., Cressoni M., Chiumello D., Ranieri V.M., Quintel M., et al. Lung recruitment in patients with the acute respiratory distress syndrome. N Engl J Med. 2006;354:1775–1786. doi: 10.1056/NEJMoa052052.
- 63. Chaganti S., Balachandran A., Chabin G., Cohen S., Flohr T., Georgescu B., et al. Quantification of tomographic patterns associated with COVID-19 from chest CT. arXiv:2004.01279; 2020. Available from: http://arxiv.org/abs/2004.01279 [cited 7.06.20].