Diagnostic and Interventional Radiology. 2020 Aug 12;27(1):20–27. doi: 10.5152/dir.2020.20205

Determination of disease severity in COVID-19 patients using deep learning in chest X-ray images

Maxime Blain 1,*, Michael T Kassin 1,*, Nicole Varble 1,*, Xiaosong Wang 1, Ziyue Xu 1, Daguang Xu 1, Gianpaolo Carrafiello 1, Valentina Vespro 1, Elvira Stellato 1, Anna Maria Ierardi 1, Letizia Di Meglio 1, Robert D Suh 1, Stephanie A Walker 1, Sheng Xu 1, Thomas H Sanford 1, Evrim B Turkbey 1, Stephanie Harmon 1, Baris Turkbey 1, Bradford J Wood 1
PMCID: PMC7837735  PMID: 32815519

Abstract

PURPOSE

Chest X-ray plays a key role in the diagnosis and management of COVID-19 patients, and imaging features associated with clinical elements may assist with the development or validation of automated image analysis tools. We aimed to identify associations between clinical and radiographic features as well as to assess the feasibility of deep learning applied to chest X-rays in the setting of an acute COVID-19 outbreak.

METHODS

A retrospective study of X-rays and clinical and laboratory data was performed in 48 SARS-CoV-2 RT-PCR positive patients (age 60±17 years, 15 women) treated between February 22 and March 6, 2020 at a tertiary care hospital in Milan, Italy. Sixty-five chest X-rays were reviewed by two radiologists for alveolar and interstitial opacities and classified by severity on a scale from 0 to 3. Clinical factors (age, symptoms, comorbidities) were investigated for association with opacity severity and with placement of a central line or endotracheal tube. Deep learning models were then trained for two tasks: lung segmentation and opacity detection. Imaging characteristics were compared to clinical datapoints using the unpaired Student's t-test or Mann-Whitney U test. Cohen's kappa analysis was used to evaluate the concordance of deep learning with conventional radiologist interpretation.

RESULTS

Fifty-six percent of patients presented with alveolar opacities, 73% had interstitial opacities, and 23% had normal X-rays. The presence of alveolar or interstitial opacities was statistically correlated with age (p = 0.008) and comorbidities (p = 0.005). The extent of alveolar or interstitial opacities on baseline X-ray was significantly associated with the presence of endotracheal tube (p = 0.0008 and p = 0.049) or central line (p = 0.003 and p = 0.007). In comparison to human interpretation, the deep learning model achieved a kappa concordance of 0.51 for alveolar opacities and 0.71 for interstitial opacities.

CONCLUSION

Chest X-ray analysis in an acute COVID-19 outbreak showed that the severity of opacities was associated with advanced age, comorbidities, as well as acuity of care. Artificial intelligence tools based upon deep learning of COVID-19 chest X-rays are feasible in the acute outbreak setting.


The imaging features of the novel coronavirus (SARS-CoV-2) and the coronavirus disease 2019 (COVID-19) pandemic are still being characterized and understood (1, 2). The estimated mortality rate is reported between 1.4% and 7% (3). Health services and intensive care units (ICUs) are facing critical saturation in this pandemic (4), where early and wise resource allocation decisions may impact population outcomes.

Radiology departments play a key role in this pandemic (5–8), with imaging data potentially contributing towards detection (9–14), characterization (9), monitoring (15–18), triage (19–22), resource allocation, early intervention, and isolation (8). Although speculative, models that correlate imaging findings to outcomes could be helpful or predictive in the management and triage of the 20% of SARS-CoV-2 positive patients who develop more serious manifestations of COVID-19 pneumonia. Epidemiology standards require a waiting period between patients with airborne viral diseases, which may practically limit computed tomography (CT) use. To date, radiology and thoracic professional societies have pointed to the efficiency, ease of access, field availability, and repeatability of chest X-ray, as well as its ease of cleaning and decontamination. These strengths are balanced against the higher sensitivity and specificity of CT. Moreover, when patients are encouraged to present early in the course of their disease, as was the case in Hubei Province, China, chest X-ray may have less value than CT.

Typical and characteristic CT features of COVID-19 related pneumonia have been recently defined (23–28). Chest X-ray findings might help address the clinical decision-making in screening, management, and prioritization that may unfortunately arise in the care of COVID-19 patients. Resource allocation may be most critical during peak prevalence, when imaging equipment may also be stretched thin, or not accessible in intensive care or "medical surge facility" settings.

Deep learning uses convolutional neural networks that act as a "black box" in that they may or may not use conventional imaging features to classify the outputs. Machine learning, on the other hand, uses specific features and generally requires fewer data points to ensure clinically relevant accuracy or validity. This study uses these tools for explanatory purposes, not to produce a refined or usable model at this early stage. There are currently limited reports of the role of chest X-ray in COVID-19 patients, with scarce details on the application and role of deep learning of chest X-rays in this population. Previous papers reported that the main findings were bilateral reticular nodular opacities, ground-glass opacities, and peripheral consolidations (29–31). Deep learning and artificial intelligence (AI) applications in chest radiography are in their infancy, but there are multiple commercial platforms for computer-aided detection of pulmonary nodules and for characterization and quantification of interstitial lung disease (32–34). We aimed to identify associations between clinical and radiographic features as well as to assess the feasibility of deep learning applied to chest X-rays in the setting of an acute COVID-19 outbreak.

Methods

A total of 48 patients (60±17 years, 15 women) with 65 X-rays, collected during the initial phase of the outbreak in Italy, were analyzed. Research Ethics Committee / Institutional Review Board approval was obtained for this study with an exemption from the requirement for written informed consent, since the protocol met waiver of consent criteria. Chest X-rays and limited clinical and laboratory data of 48 Italian patients with positive RT-PCR for SARS-CoV-2 from February 22 to March 6, 2020 were retrospectively analyzed. Nine patients had multiple X-rays. The average time delay between RT-PCR and imaging was 0.54 days, with a maximum of 10 days. Clinical data and characteristics included date of X-rays, gender, symptoms (fever, cough, dyspnea, and other), associated comorbidity diagnoses, smoking history, date of real-time RT-PCR testing, RT-PCR titer, platelets, and absolute white cell count with differentials including neutrophils, lymphocytes, monocytes, eosinophils, and basophils.

Image characterization

The X-rays were assessed by two radiologists with five and seven years of experience, who analyzed the presence of alveolar opacity and interstitial opacity. An alveolar pattern was noted when there was a fluffy, ill-defined opacity with rounded shapes. An interstitial pattern was noted for linear opacities arising from the hilum and extending through the lung parenchyma, parallel to vessels. The severity of the extent of both alveolar and interstitial opacity was subjectively graded on a scale from 0 to 3 (13, 15). Specifically, the alveolar opacity extent severity was 0 for 0% of lung space, 1 for 1%–25%, 2 for 26%–50%, and 3 for >50% of lung space (35). The interstitial opacity extent severity was 0 for no reticular opacities, 1 for reticular opacities immediately adjacent to the hilum, 2 for reticular opacities extending from the hilum through half of the lung parenchyma, and 3 for reticular opacities extending from the hilum to the chest wall. Further, alveolar opacities were characterized as lobar or multi-lobar based on presence in one lobe or more than one lobe, respectively. The apical-basal distribution of the overall extent of opacity was characterized as present in the apex of the lung, the basal portion of the lung, diffusely involving the entire lung, or none. The laterality of overall opacity was characterized as bilateral, right, left, or none. The X-rays were evaluated for mediastinal silhouette enlargement, defined as a cardiothoracic index greater than 0.5 (regardless of portable, PA, or AP orientation). The X-rays were also evaluated for pleural effusion as well as presence of an endotracheal tube (ETT) or central line. The two radiologists independently characterized and classified the X-rays, identified mismatched characterizations, and came to consensus agreement based on the definitions above. Patients with more than one X-ray were evaluated for evolution of imaging findings.
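To make the alveolar grading rule concrete, the following is a minimal sketch of the 0–3 mapping as a hypothetical helper function (the function name and interface are illustrative, not part of the study's software):

```python
def alveolar_severity(percent_lung_involved: float) -> int:
    """Map the estimated % of lung space showing alveolar opacity to the
    0-3 grade used in this study (0%, 1%-25%, 26%-50%, >50%)."""
    if percent_lung_involved <= 0:
        return 0
    if percent_lung_involved <= 25:
        return 1
    if percent_lung_involved <= 50:
        return 2
    return 3

# example: 40% involvement -> grade 2
assert alveolar_severity(40) == 2
```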

Deep learning modeling

Our deep learning AI model had two different tasks. The first was a lung segmentation model with a modified U-Net architecture (EfficientNet backbone) (36), trained using public datasets, e.g., JSRT (37), MontgomeryCXR, and ShenzhenCXR (38), comprising a total of 1048 images. The second task was image classification, to predict whether or not alveolar and interstitial opacities existed. This used a multitask multiclass classification framework based on an ImageNet-pretrained DenseNet121 (39) model that was fine-tuned and evaluated using the 65 images in a 5-fold cross-validation fashion. The classification was applied only to the segmented lung regions of the image, based on the mask from the first task. Saliency heatmaps for both interstitial and alveolar opacities were generated for all 65 X-rays (130 heatmaps total) to visualize geographic regions of importance that aided in the classification of interstitial or alveolar opacity (40). Both models were trained using PyTorch on an NVIDIA Titan Xp.
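For illustration only, below is a minimal PyTorch sketch of how such a classification stage could be set up: a DenseNet121 backbone (ImageNet weights would be loaded in practice) with two output heads, one for alveolar and one for interstitial opacity, applied to the lung-masked image. The head layout, the masking-by-multiplication step, and all class and variable names are assumptions for this sketch, not the authors' exact implementation.

```python
import torch
import torch.nn as nn
import torchvision


class OpacityClassifier(nn.Module):
    """DenseNet121 backbone with two heads (alveolar, interstitial opacity).

    A sketch of the multitask classification stage described above; the head
    layout, masking step, and all names are assumptions for illustration."""

    def __init__(self):
        super().__init__()
        backbone = torchvision.models.densenet121()  # load ImageNet-pretrained weights here in practice
        num_feat = backbone.classifier.in_features
        backbone.classifier = nn.Identity()          # keep the pooled 1024-d features
        self.backbone = backbone
        self.alveolar_head = nn.Linear(num_feat, 2)      # absent / present
        self.interstitial_head = nn.Linear(num_feat, 2)  # absent / present

    def forward(self, image: torch.Tensor, lung_mask: torch.Tensor):
        # restrict the classifier to the segmented lung region (assumed: simple masking)
        masked = image * lung_mask
        feats = self.backbone(masked)
        return self.alveolar_head(feats), self.interstitial_head(feats)


# toy usage: a batch of two 3-channel chest X-rays with all-ones lung masks
model = OpacityClassifier()
x = torch.randn(2, 3, 224, 224)
m = torch.ones(2, 1, 224, 224)
alveolar_logits, interstitial_logits = model(x, m)
```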

Statistical analysis

Clinical and imaging characteristics were analyzed with descriptive statistics using averages and standard deviations. Imaging characteristics were then compared to clinical datapoints using the unpaired Student's t-test for parametric data and the Mann-Whitney U test for nonparametric data. To evaluate the performance of the deep learning model, Cohen's kappa (κ) analysis was used to assess the concordance of the model with the radiologists' consensus diagnosis. Accuracy was defined as the overall agreement between the radiologist consensus diagnosis and the model's predictions. A p value of <0.05 was considered significant and results were reported as mean ± standard deviation (SD). Statistical analyses were done in R (version 3.6.3).
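The statistical analyses in this study were performed in R; as an illustration of the same tests, here is a minimal Python sketch using scipy and scikit-learn on hypothetical toy data (all values below are made up for demonstration):

```python
import numpy as np
from scipy import stats
from sklearn.metrics import accuracy_score, cohen_kappa_score

# hypothetical toy data: ages of patients with vs. without opacities
age_opacity = np.array([72, 65, 58, 81, 60, 77, 70])
age_no_opacity = np.array([45, 38, 52, 47])

# unpaired Student's t-test for parametric comparisons
t_stat, p_t = stats.ttest_ind(age_opacity, age_no_opacity)

# Mann-Whitney U test for nonparametric comparisons (e.g., 0-3 severity grades)
severity_with_ett = [2, 3, 2, 3]
severity_without_ett = [0, 1, 0, 2, 1]
u_stat, p_u = stats.mannwhitneyu(severity_with_ett, severity_without_ett,
                                 alternative="two-sided")

# Cohen's kappa and overall accuracy: model predictions vs. radiologist consensus
radiologist = [1, 0, 1, 1, 0, 1, 0, 1]
model_pred  = [1, 0, 1, 0, 0, 1, 1, 1]
kappa = cohen_kappa_score(radiologist, model_pred)
acc = accuracy_score(radiologist, model_pred)

print(f"t-test p={p_t:.3f}, Mann-Whitney p={p_u:.3f}, kappa={kappa:.2f}, accuracy={acc:.2f}")
```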

Results

Table 1 outlines demographic and clinical characteristics. The main clinical finding was fever in 35 cases (73%), followed by cough in 22 (46%), and dyspnea in 7 (15%). Among other symptoms, the most common was pharyngeal pain (6 cases; 13%). Thirteen patients had chest X-rays but did not have complete clinical presentation information, as they were transferred from other hospitals.

Table 1.

Demographic and clinical characteristics of SARS-CoV-2 positive patients

n (%)
Age (years), mean (min–max) 60.2 (27–92)

Gender F 15 (31)
M 33 (69)

Fever Yes 35 (73)
No 4 (8)
Not reported 9 (19)

Cough Yes 22 (46)
No 17 (35)
Not reported 9 (19)

Dyspnea Yes 7 (15)
No 28 (58)
Not reported 9 (19)

Other symptoms Yes 15 (31)
No 24 (50)
Not reported 9 (19)

Comorbidity Yes 27 (56)
No 12 (25)
Not reported 9 (19)

Twenty-seven patients (56%) had comorbidities. The main comorbidities were hypertension (n=12; 25%), diabetes (n=7; 15%), and obesity (n=6; 13%). The majority of the patients (n=28; 59%) had multiple comorbidities. Other potentially relevant comorbidities included cardiopathy (n=3; 6%), chronic obstructive pulmonary disease (n=2; 4%), congestive heart failure (n=2; 4%), ischemic heart disease (n=1; 2%), and acute myeloid leukemia (n=1; 2%).

All 48 patients had a positive RT-PCR for SARS-CoV-2; 13 patients received initial diagnostic testing and RT-PCR at outside facilities, and their clinical presenting data were not available for analysis. The 35 patients with laboratory values had a mean C-reactive protein (CRP) level of 5.83 mg/dL (reference range, <0.5 mg/dL; SD, 8.29 mg/dL). The mean blood cell counts were as follows: neutrophils 4.09 ×10⁹/L (reference range, 1.50–6.50 ×10⁹/L; SD, 2.86 ×10⁹/L), monocytes 0.53 ×10⁹/L (reference range, 0.30–0.60 ×10⁹/L; SD, 0.27 ×10⁹/L), basophils 0.02 ×10⁹/L (reference range, 0.01–0.20 ×10⁹/L; SD, 0.02 ×10⁹/L), platelets 176.09 ×10⁹/L (reference range, 130–400 ×10⁹/L; SD, 47.55 ×10⁹/L); lymphocyte and eosinophil counts were low, with a mean of 1.09 ×10⁹/L (reference range, 1.20–3.40 ×10⁹/L; SD, 0.50 ×10⁹/L) for lymphocytes and a mean of 0.01 ×10⁹/L (reference range, 0.10–0.80 ×10⁹/L; SD, 0.03 ×10⁹/L) for eosinophils.

Eleven initial X-rays (23%) were without infiltrates. Twenty-seven patients (56%) had alveolar opacities on initial X-ray, with 8 (17%) having alveolar opacities involving more than 50% of the lung (Table 2). Thirty-five patients (73%) had interstitial opacities, most of them adjacent to the hilum. Thirty-three (69%) of the overall opacities, including both alveolar and interstitial opacities, were bilateral. A basal predominance of overall opacity distribution was noted in 22 X-rays (46%), while 15 patients (31%) had diffuse involvement of both apex and base. Six (13%) and 9 (19%) of the initial X-rays showed an endotracheal tube or central line, respectively.

Table 2.

Analysis of baseline chest X-rays of SARS-CoV-2 positive patients

Imaging characteristics n (%)*
Alveolar opacity 27 (56)

Severity 0 21 (44)
1 13 (27)
2 6 (13)
3 8 (17)

Distribution Multi-lobar 20 (42)
Lobar 7 (15)
None 21 (44)

Interstitial opacity 35 (73)

Severity 0 13 (27)
1 20 (42)
2 11 (23)
3 4 (8)

Apical-basal predominance Apical 0 (0)
Basal 22 (46)
Diffuse 15 (31)
None 11 (23)

Laterality Right 2 (4)
Left 2 (4)
Bilateral 33 (69)
None 11 (23)

Heart size enlarged (cardiothoracic index >1/2) 4 (8)

Pleural effusion 3 (6)

Endotracheal tube 6 (13)

Central line 9 (19)
*Percentages based on 48 patients.

The presence of either alveolar or interstitial opacities (i.e., presence of infiltrate of any type) on chest X-ray was statistically correlated with patient age (presence of opacity, 64±14 years; no opacity, 46±18 years; p = 0.008) and the number of comorbidities (presence of opacity, 1.6±1.2; no opacity, 0.5±0.9; p = 0.005) (Fig. 1). The number of primary symptoms (fever, cough, and/or dyspnea) did not correlate with the presence of opacities (presence of opacity, 1.8±0.6; no opacity, 1.7±0.8; p = 0.596). There were also trends between opacity extent and age, as well as between opacity extent and comorbidity.

Figure 1. a, b.

Chest X-ray and clinical data correlation for COVID-19 positive patients. Panel (a) shows a comparison of patients with (n=37) and without (n=11) the presence of either alveolar or interstitial opacity. The presence of alveolar or interstitial opacity was statistically correlated with higher age and a higher number of comorbidities. Panel (b) shows the distribution of patients with and without the presence of alveolar or interstitial opacity. These data suggest a trend of increasing alveolar and interstitial opacity with age and comorbidities.

A higher severity of alveolar or interstitial opacities on baseline chest X-ray was significantly correlated with the presence of an endotracheal tube or central line (Fig. 2). Alveolar opacity extent severity was higher for patients with ETT placement (with ETT: median, 2.5 [range, 2–3]; no ETT: median, 0.5 [range, 0–3]; p = 0.0008). Interstitial opacity extent severity was also higher for patients with ETT placement (with ETT: median, 2 [range, 1–2]; no ETT: median, 1 [range, 0–3]; p = 0.049). Similarly, alveolar opacity extent severity was higher for patients with a central line (with central line: median, 3 [range, 0–3]; no central line: median, 1 [range, 0–3]; p = 0.003), as was interstitial opacity extent severity (with central line: median, 2 [range, 1–2]; no central line: median, 1 [range, 0–3]; p = 0.007). Further, patients with diffuse involvement of both apex and base showed a higher rate of endotracheal tube (n=3, 19% vs. n=3, 13%) or central line (n=4, 25% vs. n=4, 17%) placement.

Figure 2.

Correlation of endotracheal tube and central line placement with the severity of alveolar or interstitial opacities. Comparison of the severity of parenchymal opacities on baseline X-ray for patients with (n=6) and without (n=42) an endotracheal tube (ETT), and with (n=9) and without (n=39) central line placement. The p values are indicated.

Of the 9 patients with longitudinal chest X-rays, 5 showed evolution of X-ray findings over time, with increasing alveolar or interstitial opacity severity (Fig. 3). In one representative patient, the day 0 X-ray showed interstitial opacity with subsequent development of alveolar opacity, which became more severe and diffuse over a 7-day period. The patient received a central line as the opacities progressed (Fig. 3). Deep learning models were not applied to these serial data.

Figure 3. a, b.

Longitudinal chest X-ray evaluation. Panel (a) shows the change in alveolar opacity severity versus days since first scan. Changes in imaging findings and if a central line was placed are indicated at corresponding timepoints. Panel (b) shows longitudinal chest X-ray series for an 81-year-old female patient, whose disease course is documented in panel (a) (dark green line).

The deep learning workflow consisted of first building and applying a whole lung field segmentation model based upon public datasets. Image classification of this small data set was then performed with a multi-task, multi-class classification framework and evaluated with 5-fold cross-validation.
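A minimal sketch of how such a 5-fold split over 65 studies could be organized, here using scikit-learn's KFold (the exact splitting and training scheme used by the authors is not specified, so the details below are assumptions):

```python
import numpy as np
from sklearn.model_selection import KFold

# hypothetical stand-ins for the 65 preprocessed (lung-masked) chest X-rays
n_images = 65
image_indices = np.arange(n_images)
labels = np.random.randint(0, 2, size=n_images)  # dummy opacity labels

kfold = KFold(n_splits=5, shuffle=True, random_state=0)
for fold, (train_idx, val_idx) in enumerate(kfold.split(image_indices)):
    # real workflow (assumed): fine-tune the classifier on train_idx,
    # predict on val_idx, and pool out-of-fold predictions for evaluation
    print(f"fold {fold}: {len(train_idx)} training / {len(val_idx)} validation images")
```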

In the subjective evaluation of 4 radiologists, the lung field segmentation produced by the AI model was reasonably accurate in the delineation of lung field margins and pleural edges for all 65 X-ray images. Whole lung field segmentation was possible without focal infiltrates confounding the task (Fig. 4). The main limitations/errors of the lung field segmentation were the retrocardiac left lower lobe (61/65), the pleural sulci (12/65), predominantly consolidated lung (8/65), and erroneous inclusion of small non-lung areas in the segmentation (10/65).

Figure 4. a–d.

Illustrative case of the deep learning model for lung segmentation and classification of alveolar and interstitial opacities. Baseline chest X-ray (a) of a patient with both alveolar and interstitial opacities. Total lung field segmentation image (b) of the same patient. Note that the retrocardiac left lower lobe is erroneously excluded from the total lung field segmentation, a challenging problem in frontal chest X-ray lung field segmentation. Alveolar opacity heat map (c). Interstitial opacity heat map (d). Note that the map used the retrocardiac region, but did not use the region of the small round ground-glass opacity in the mid left lung (arrow).

Following segmentation, the deep learning model diagnosed alveolar opacities with 78.5% concordance (κ=0.51) with the radiologist diagnosis and interstitial opacities with 90.7% concordance (κ=0.71). A 58-year-old male with typical alveolar and interstitial opacities who presented with fever is shown in Fig. 4, which also illustrates the computed saliency heatmaps for a positive case with alveolar and interstitial opacity, respectively. The multi-class classification accuracy was computed using sklearn.metrics.accuracy_score.
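Saliency heatmaps like those referenced above can be produced in several ways; one generic option is input-gradient saliency, sketched below in PyTorch (this is an illustrative stand-in, not necessarily the method used in the study):

```python
import torch

def input_gradient_saliency(model, image: torch.Tensor, target_class: int = 1):
    """Return |d(class score)/d(pixel)| as a per-image heatmap.

    Generic input-gradient saliency; the exact heatmap method used in the
    study is not specified here, so this is only an illustrative stand-in."""
    model.eval()
    image = image.clone().requires_grad_(True)
    logits = model(image)                      # model: images -> class logits
    logits[:, target_class].sum().backward()   # gradient of the chosen class score
    return image.grad.abs().max(dim=1)[0]      # collapse channels -> [B, H, W]

# toy usage with any off-the-shelf classifier, e.g. a torchvision DenseNet121:
# import torchvision
# net = torchvision.models.densenet121()
# heatmap = input_gradient_saliency(net, torch.randn(1, 3, 224, 224), target_class=0)
```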

Discussion

This preliminary analysis correlating chest X-rays from the Northern Italy outbreak with demographic and clinical features provides a model for demonstrating deep learning. Opacities of any kind may be classified with deep learning and may also correlate with certain demographic and clinical features. The specific utility of chest X-ray versus CT has yet to reach consensus in this rapidly evolving pandemic and remains an area of active investigation.

Advanced age and the presence of comorbidities were recapitulated as risk factors in COVID-19, here associated with the presence of opacities of either type. Endotracheal tube and central line placement (as surrogates of clinical severity) were associated with increased extent of opacities. Five of the nine patients with multiple X-rays showed progression of the extent of opacity. This is not unexpected, but suggests potential utility in obtaining a baseline X-ray and trending the radiographic appearance to correlate with clinical condition. The typical evolution of imaging features over time and upon intubation and ICU admission may be reflected and monitored with serial chest X-rays, which can highlight superimposed heart failure or bacterial superinfection that mandate treatment modifications. Sequential quantification of serial X-ray changes may be one way in the future for deep learning models to provide a standardized, reproducible response or progression metric as a biomarker for therapies and disease, which merits further attention.

This series establishes the foundational feasibility of future deep learning and AI chest X-ray segmentation and classification analyses for COVID-19 patients. The deep learning model reasonably predicted the presence of alveolar and interstitial chest X-ray opacities in patients with positive RT-PCR, even with a fairly small source data set. Although entirely speculative, this lays the foundation for the potential use of deep learning models for chest X-ray quantification of disease, decompensation risk, prognosis, additional therapies, contagiousness, outcomes, triage, or even potentially scarce resource allocation. Addressing these critical but speculative questions and defining downstream utility will require larger aggregate data sets with paired outcomes.

This series is limited by the retrospective nature of the small 48-patient data analysis. Further, patients were excluded from some clinical analyses when the associated data points were not available. Patient selection biases certainly exist, as patients were not selected in a prospective or randomized fashion. Such limitations are understandable, given the acute patient care and time burdens on the medical team in Italy. Lack of chest X-ray orientation information (posteroanterior or anteroposterior, upright or supine) potentially interfered with X-ray analysis. An additional limitation is the unrecorded timing of the chest X-rays relative to symptoms, which remains a major gap in our understanding. It is known that 50% of CT scans are positive in the initial 2 days, that the number and sites of involvement increase in days 3–5, and that the maximum CT infiltrate is seen at days 10–13 (19). The exact timing and evolution of chest X-ray features remain ill-defined, although chest X-ray may be expected to be less sensitive than CT. The semiquantitative X-ray classification system used by the two observers was created for the purpose of this study but is not validated, which is a major limitation of this study. Finally, the lack of a control group is a major shortcoming of this early analysis. For instance, background chronic lung disease was not specifically controlled for and could certainly alter the results, even though it might be expected to be evenly distributed among the groups.

Age and comorbidities were associated with increased opacities on chest X-ray. Increased opacities were associated with an increased level and acuity of care, manifest as the placement of ETTs or central lines. One can imagine the potential impact of deep learning models in the ability to segment lungs and quantify opacities in a standardized and reproducible fashion. The deep learning evaluation was limited by the small sample size and will require further development with larger datasets of annotated ground-truth images. Unfortunately, the time for meaningful action and impactful science is short. Realization of the fullest potential of deep learning and AI to impact the COVID-19 pandemic and address urgent unmet clinical needs will necessitate open multinational team science and open data sharing and aggregation. Deep learning algorithms for CT have displayed the ability to detect COVID-19, differentiate it from community-acquired pneumonia, and quantify disease burden (29, 30). Even with limited training data in COVID-19 positive patients, this reasonably correlative deep learning model for chest X-rays suggests that larger data sets will offer greater opportunity for clinical utility.

In conclusion, chest X-ray analysis in an acute COVID-19 outbreak showed that the severity of opacities was associated with advanced age, comorbidities, as well as acuity of care. This small series from early time points of the outbreak in Italy showed that chest X-ray may inform triage and management of COVID-19 patients and AI may add future value, especially in the state of overload induced by such a pandemic. Future work in AI could also try to predict outcomes and prognosis based upon clinical input, which might include imaging. Such models likely will need to be multiparametric where imaging is but one feature of a predictive model. Our study begins to address the urgent need for deep learning models and other data science assessments to unify multiparametric factors and meta-data into a single metric. Future work will determine if and how chest X-ray AI models could play roles at multiple time points during the COVID-19 disease process, including diagnosis, triage, prognosis, and response.

Main points

  • A deep learning model allowed for automated classification of chest X-rays in patients with acute COVID-19 pneumonia.

  • Alveolar and interstitial opacities in chest X-rays from COVID-19 patients correlated with comorbidities and advanced age.

  • The severity of opacities on baseline chest X-ray was significantly correlated with increased acuity of care.

Footnotes

Financial disclosure

This work was supported by the Center for Interventional Oncology and the Intramural Research Program of the National Institutes of Health (NIH) by intramural NIH Grants NIH Z01 1ZID BC011242 and CL040015. This project has been funded in part with federal funds from the National Cancer Institute, National Institutes of Health, under Contract No. 75N91019D00024, Task Order No. 75N91019F00129.

FDA: Discussion of software device without FDA clearance.

The content of this manuscript does not necessarily reflect the views, policies, or opinions of the U.S. Department of Health and Human Services. The mention of commercial products, their source, or their use in connection with material reported herein is not to be construed as an actual or implied endorsement of such products by the United States government. Opinions expressed are those of the authors, not necessarily the NIH.

Conflict of interest disclosure

NV is an employee of Philips Research. DX, ZX, XW are employees of NVIDIA. MB is a recipient of the 2019 Alain Rahmouni SFR-CERF research grant provided by the French Society of Radiology together with the French Academic College of Radiology. BW is Principal Investigator on the following CRADAs (Cooperative Research & Development Agreements) between NIH and related commercial partners: Philips Image Guided Therapy (CRADA), Philips Research (CRADA), Philips (CRADA), Siemens (CRADA), NVIDIA (CRADA). Licensed Patents / Royalties: Philips (NIH and BW receive royalties for licensed patents from Philips).

References

  • 1. Wu Z, McGoogan JM. Characteristics of and important lessons from the coronavirus disease 2019 (COVID-19) outbreak in China: summary of a report of 72,314 cases from the Chinese Center for Disease Control and Prevention. JAMA. 2020 Feb 24. [Published Ahead of Print]
  • 2. Zhou F, Yu T, Du R, et al. Clinical course and risk factors for mortality of adult inpatients with COVID-19 in Wuhan, China: a retrospective cohort study. Lancet. 2020;395:1054–62. doi: 10.1016/S0140-6736(20)30566-3.
  • 3. Baud D, Qi X, Nielsen-Saines K, Musso D, Pomar L, Favre G. Real estimates of mortality following COVID-19 infection. Lancet Infect Dis. 2020;20:773. doi: 10.1016/S1473-3099(20)30195-X.
  • 4. Wu C, Chen X, Cai Y, et al. Risk factors associated with acute respiratory distress syndrome and death in patients with coronavirus disease 2019 pneumonia in Wuhan, China. JAMA Intern Med. 2020;180:934–943. doi: 10.1001/jamainternmed.2020.0994.
  • 5. Ai T, Yang Z, Hou H, et al. Correlation of chest CT and RT-PCR testing in coronavirus disease 2019 (COVID-19) in China: a report of 1014 cases. Radiology. 2020 Feb 26:200642. doi: 10.1148/radiol.2020200642. [Published Ahead of Print]
  • 6. Hao W, Li M. Clinical diagnostic value of CT imaging in COVID-19 with multiple negative RT-PCR testing. Travel Med Infect Dis. 2020;34:101627. doi: 10.1016/j.tmaid.2020.101627.
  • 7. Li Y, Xia L. Coronavirus disease 2019 (COVID-19): role of chest CT in diagnosis and management. AJR Am J Roentgenol. 2020;214:1280–1286. doi: 10.2214/AJR.19.22688.
  • 8. Kooraki S, Hosseiny M, Myers L, Gholamrezanezhad A. Coronavirus (COVID-19) outbreak: what the department of radiology should know. J Am Coll Radiol. 2020;17:447–451. doi: 10.1016/j.jacr.2020.02.008.
  • 9. Bai HX, Hsieh B, Xiong Z, et al. Performance of radiologists in differentiating COVID-19 from viral pneumonia on chest CT. Radiology. 2020 Mar 10:200823. doi: 10.1148/radiol.2020200823. [Published Ahead of Print]
  • 10. Fang Y, Zhang H, Xie J, et al. Sensitivity of chest CT for COVID-19: comparison to RT-PCR. Radiology. 2020 Feb 19:200432. doi: 10.1148/radiol.2020200432. [Published Ahead of Print]
  • 11. Huang P, Liu T, Huang L, et al. Use of chest CT in combination with negative RT-PCR assay for the 2019 novel coronavirus but high clinical suspicion. Radiology. 2020;295:22–23. doi: 10.1148/radiol.2020200330.
  • 12. Li D, Wang D, Dong J, et al. False-negative results of real-time reverse-transcriptase polymerase chain reaction for severe acute respiratory syndrome coronavirus 2: role of deep-learning-based CT diagnosis and insights from two cases. Korean J Radiol. 2020;21:505–508. doi: 10.3348/kjr.2020.0146.
  • 13. Xie X, Zhong Z, Zhao W, Zheng C, Wang F, Liu J. Chest CT for typical 2019-nCoV pneumonia: relationship to negative RT-PCR testing. Radiology. 2020 Feb 12. [Published Ahead of Print]
  • 14. Lin C, Ding Y, Xie B, et al. Asymptomatic novel coronavirus pneumonia patient outside Wuhan: the value of CT images in the course of the disease. Clin Imaging. 2020;63:7–9. doi: 10.1016/j.clinimag.2020.02.008.
  • 15. Li K, Wu J, Wu F, et al. The clinical and chest CT features associated with severe and critical COVID-19 pneumonia. Invest Radiol. 2020;55:327–331. doi: 10.1097/RLI.0000000000000672.
  • 16. Liu KC, Xu P, Lv WF, et al. CT manifestations of coronavirus disease-2019: a retrospective analysis of 73 cases by disease severity. Eur J Radiol. 2020;126:108941. doi: 10.1016/j.ejrad.2020.108941.
  • 17. Zhao W, Zhong Z, Xie X, Yu Q, Liu J. Relation between chest CT findings and clinical conditions of coronavirus disease (COVID-19) pneumonia: a multicenter study. AJR Am J Roentgenol. 2020;214:1072–1077. doi: 10.2214/AJR.20.22976.
  • 18. Chen Z, Fan H, Cai J, et al. High-resolution computed tomography manifestations of COVID-19 infections in patients of different ages. Eur J Radiol. 2020;126:108972. doi: 10.1016/j.ejrad.2020.108972.
  • 19. Bernheim A, Mei X, Huang M, et al. Chest CT findings in coronavirus disease-19 (COVID-19): relationship to duration of infection. Radiology. 2020;295:200463. doi: 10.1148/radiol.2020200463.
  • 20. Pan Y, Guan H, Zhou S, et al. Initial CT findings and temporal changes in patients with the novel coronavirus pneumonia (2019-nCoV): a study of 63 patients in Wuhan, China. Eur Radiol. 2020;30:3306–3309. doi: 10.1007/s00330-020-06731-x.
  • 21. Pan F, Ye T, Sun P, et al. Time course of lung changes on chest CT during recovery from 2019 novel coronavirus (COVID-19) pneumonia. Radiology. 2020;295:715–721. doi: 10.1148/radiol.2020200370.
  • 22. Wang Y, Dong C, Hu Y, et al. Temporal changes of CT findings in 90 patients with COVID-19 pneumonia: a longitudinal study. Radiology. 2020 Mar 19. [Published Ahead of Print]
  • 23. Chung M, Bernheim A, Mei X, et al. CT imaging features of 2019 novel coronavirus (2019-nCoV). Radiology. 2020;295:202–207. doi: 10.1148/radiol.2020200230.
  • 24. Kanne JP. Chest CT findings in 2019 novel coronavirus (2019-nCoV) infections from Wuhan, China: key points for the radiologist. Radiology. 2020;295:16–17. doi: 10.1148/radiol.2020200241.
  • 25. Song F, Shi N, Shan F, et al. Emerging 2019 novel coronavirus (2019-nCoV) pneumonia. Radiology. 2020;295:210–217. doi: 10.1148/radiol.2020200274.
  • 26. Xu X, Yu C, Qu J, et al. Imaging and clinical features of patients with 2019 novel coronavirus SARS-CoV-2. Eur J Nucl Med Mol Imaging. 2020;47:1275–1280. doi: 10.1007/s00259-020-04720-2.
  • 27. Ye Z, Zhang Y, Wang Y, Huang Z, Song B. Chest CT manifestations of new coronavirus disease 2019 (COVID-19): a pictorial review. Eur Radiol. 2020;30:4381–4389. doi: 10.1007/s00330-020-06801-0.
  • 28. Zhou S, Wang Y, Zhu T, Xia L. CT features of coronavirus disease 2019 (COVID-19) pneumonia in 62 patients in Wuhan, China. AJR Am J Roentgenol. 2020;214:1287–1294. doi: 10.2214/AJR.20.23154.
  • 29. Arentz M, Yim E, Klaff L, et al. Characteristics and outcomes of 21 critically ill patients with COVID-19 in Washington State. JAMA. 2020;323:1612–1614. doi: 10.1001/jama.2020.4326.
  • 30. Ng MY, Lee EY, Yang J, et al. Imaging profile of the COVID-19 infection: radiologic findings and literature review. Radiology. 2020 Feb 13. [Published Ahead of Print]
  • 31. Yoon SH, Lee KH, Kim JY, et al. Chest radiographic and CT findings of the 2019 novel coronavirus disease (COVID-19): analysis of nine patients treated in Korea. Korean J Radiol. 2020;21:494–500. doi: 10.3348/kjr.2020.0132.
  • 32. Gozes O, Frid-Adar M, Greenspan H, et al. Rapid AI development cycle for the coronavirus (COVID-19) pandemic: initial results for automated detection & patient monitoring using deep learning CT image analysis. arXiv 2020;2003.05037v3.
  • 33. Li L, Qin L, Xu Z, et al. Artificial intelligence distinguishes COVID-19 from community acquired pneumonia on chest CT. Radiology. 2020 Mar 19. [Published Ahead of Print]
  • 34. Tárnok A. Machine learning, COVID-19 (2019-nCoV), and multi-OMICS. Cytometry Part A. 2020;97:215–216. doi: 10.1002/cyto.a.23990.
  • 35. Li K, Fang Y, Li W, et al. CT image visual quantitative evaluation and clinical classification of coronavirus disease (COVID-19). Eur Radiol. 2020;30:4407–4416. doi: 10.1007/s00330-020-06817-6.
  • 36. Ronneberger O, Fischer P, Brox T. U-Net: convolutional networks for biomedical image segmentation. Medical Image Computing and Computer-Assisted Intervention (MICCAI); 2015. pp. 234–241.
  • 37. Shiraishi J, Katsuragawa S, Ikezoe J, et al. Development of a digital image database for chest radiographs with and without a lung nodule: receiver operating characteristic analysis of radiologists' detection of pulmonary nodules. AJR Am J Roentgenol. 2000;174:71–74. doi: 10.2214/ajr.174.1.1740071.
  • 38. Jaeger S, Candemir S, Antani S, Wáng YX, Lu PX, Thoma G. Two public chest X-ray datasets for computer-aided screening of pulmonary diseases. Quant Imaging Med Surg. 2014;4:475–477. doi: 10.3978/j.issn.2223-4292.2014.11.20.
  • 39. Huang G, Liu Z, Van Der Maaten L, Weinberger KQ. Densely connected convolutional networks. Proc IEEE Conf Comput Vis Pattern Recognit; 2017. pp. 4700–4708.
  • 40. Wang X, Peng Y, Lu L, Lu Z, Bagheri M, Summers RM. ChestX-ray8: hospital-scale chest X-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases. IEEE Conference on Computer Vision and Pattern Recognition; 2017. pp. 2097–2106.
