Eur J Radiol. 2021 Nov 23;146:110066. doi: 10.1016/j.ejrad.2021.110066

Performance of a computer aided diagnosis system for SARS-CoV-2 pneumonia based on ultrasound images

Shiyao Shang a,1, Chunwang Huang a,1, Wenxiao Yan a,1, Rumin Chen a, Jinglin Cao a, Yukun Zhang a, Yanhui Guo b, Guoqing Du a
PMCID: PMC8609670  PMID: 34902668

Abstract

Purpose

In this study, we aimed to leverage deep learning to develop a computer aided diagnosis (CAD) system to help radiologists diagnose SARS-CoV-2 virus syndrome on lung ultrasonography (LUS).

Method

A CAD system was developed based on transfer learning of a residual network (ResNet) to extract features from LUS images and help radiologists distinguish SARS-CoV-2 virus syndrome from healthy lungs and non-SARS-CoV-2 pneumonia. A publicly available LUS dataset for SARS-CoV-2 virus syndrome consisting of 3909 images was employed. Six radiologists with different levels of experience participated in the experiment. A comprehensive LUS dataset was constructed and used to train and verify the proposed method. Several metrics, such as accuracy, recall, precision, and F1-score, were used to evaluate the performance of the proposed CAD approach. The performance of the radiologists with and without the help of CAD was also evaluated quantitatively. The p-values of the t-tests show that, with the help of the CAD system, both junior and senior radiologists significantly improved their diagnostic performance on both balanced and unbalanced datasets.

Results

Experimental results indicate that the proposed CAD approach and the machine features derived from it can significantly improve radiologists’ performance in SARS-CoV-2 virus syndrome diagnosis. With the help of the proposed CAD system, the junior and senior radiologists achieved F1-score values of 91.33% and 95.79% on the balanced dataset and 94.20% and 96.43% on the unbalanced dataset, respectively. The proposed approach was verified on an independent test dataset and reports promising performance.

Conclusions

The proposed CAD system shows promising performance in facilitating radiologists’ diagnosis of SARS-CoV-2 virus syndrome and might assist the development of a fast, accessible screening method for pulmonary diseases.

Keywords: SARS-CoV-2 virus syndrome diagnosis, Computer aided diagnosis, Deep learning, Lung ultrasound

Abbreviations: CAD, Computer aided diagnosis; LUS, Lung ultrasonography; JR, Junior Radiologists; SR, Senior Radiologists

1. Introduction

The SARS-CoV-2 virus syndrome pandemic has struck about 255 million people and is responsible for more than five million deaths [1]. It has put unprecedented pressure on global healthcare services. A rapid and reliable diagnostic method for SARS-CoV-2 virus syndrome is a prerequisite for infectious disease control, including isolation and treatment.

In the early stage of SARS-CoV-2 virus syndrome, reverse transcriptase polymerase chain reaction (RT-PCR) may show negative results. The false-negative rate decreases from 100% on day 1 after exposure to 20% on day 8 [2]. Imaging is an important supplementary method to indicate further RT-PCR testing among suspected SARS-CoV-2 virus syndrome patients. It may be particularly useful in the Emergency Department for patient triage and faster decision making while waiting for RT-PCR results.

High-resolution computed tomography (HR-CT) has been accepted as the gold-standard imaging method for SARS-CoV-2 virus syndrome evaluation on account of its high reliability [3]. However, for critical patients in intensive care units (ICU) who are difficult to transport, or for patients in low- and middle-income countries where a CT exam is unavailable, an alternative imaging technique is required. Furthermore, the high radiation dose of CT is an important consideration for patients who need repeated imaging follow-up, children, and pregnant women. A chest X-ray can be obtained easily with minimal radiation exposure. However, studies have shown that the chest X-ray exam has a relatively low sensitivity, and a considerable proportion of SARS-CoV-2 virus syndrome patients have been misdiagnosed by chest X-ray [4], [5]. Since both HR-CT and chest X-ray have limitations for SARS-CoV-2 virus syndrome evaluation, a supplementary method is a pressing demand in clinical practice.

Lung ultrasonography (LUS) has been used to diagnose different lung diseases for decades, such as interstitial lung disease, subpleural consolidations, and acute respiratory distress syndrome. A meta-analysis reported that the sensitivity and specificity of LUS for pneumonia diagnosis were as high as 94% (95% CI, 92–96%) and 96% (94–97%) [6]. During the progression of SARS-CoV-2 virus syndrome, the distal regions of the lung tend to be more frequently involved, and these regions can be detected easily and accurately by LUS [7]. The sonographic appearances are characterized by pleural line irregularities and thickening, an increase in B-line artifacts with severity, and, less frequently, pleural effusion and subpleural consolidations [8], [9].

Numerous studies have investigated the diagnostic value of LUS in SARS-CoV-2 virus syndrome. Researchers have found a strong correlation between CT and LUS in detecting lung lesions of SARS-CoV-2 virus syndrome [10], [11]. In those studies, LUS adequately detected lung lesions in patients with SARS-CoV-2 virus syndrome. Compared with chest X-ray, LUS provided a higher sensitivity in the diagnosis of SARS-CoV-2 virus syndrome (88.9% vs. 51.9%) [5]. A recent systematic review of 51 studies with 10,155 patients with SARS-CoV-2 infection evaluated the diagnostic value of thoracic CT, chest X-ray, and ultrasound. The pooled sensitivities of thoracic CT, chest X-ray, and LUS were 87.9%, 80.6%, and 86.4%, while the pooled specificities were 80.0%, 71.5%, and 54.6%, respectively. These findings demonstrate that chest CT, chest X-ray, and ultrasound all have moderate sensitivity in diagnosing SARS-CoV-2 infection, with lower specificity for lung ultrasound and lower sensitivity for chest X-ray [12].

Since the ACR did not recommend the routine use of CT to evaluate patients with suspected SARS-CoV-2 infection [13], LUS is a promising supplementary method for SARS-CoV-2 virus syndrome evaluation with many advantages: it is portable and nonradiative, allows bedside examination, and minimizes the risk of occupational exposure and cross infection. It is particularly useful in the ICU and Emergency Department for monitoring and triage of SARS-CoV-2 infection. Nevertheless, ultrasound quality is operator-dependent, and limited experience may lead to difficulties in utilizing this technique. Due to the massive increase in the patient population, healthcare professionals face a very high workload. Many physicians from different specialities have been redeployed to treat patients with suspected or confirmed SARS-CoV-2 infection. Identifying and diagnosing SARS-CoV-2 infection with ultrasound data is challenging and time-consuming for novice users. A computer aided diagnosis (CAD) system based on artificial intelligence (AI) provides an efficient way to assist physicians in disease diagnosis, and is a promising tool to support clinical decisions with higher accuracy, less complexity, and less time consumption.

Our study on CAD of SARS-CoV-2 virus syndrome differs from previous works. We not only propose a CAD system for LUS images and justify its performance on different datasets, but also verify its efficiency in facilitating and improving the performance of physicians with different levels of experience. The new contribution of this study is twofold: (1) a novel CAD system for SARS-CoV-2 virus syndrome diagnosis is proposed using transfer learning on a deep residual convolutional neural network (CNN); (2) clinical experiments are conducted to justify the performance improvement of radiologists with the CAD’s help.

2. Materials and methods

2.1. Materials

Images are from a public dataset whose detailed information can be found at https://github.com/jannisborn/covid19_ultrasound/blob/master/data/README.md. There are 262 patients, including 92 SARS-CoV-2 virus syndrome, 79 non-SARS-CoV-2 pneumonia, and 90 healthy cases. All ultrasound videos were split into images at a frame rate of 3 Hz. A total of 3909 images were obtained from GitHub and the public dataset [14]. All samples were categorized into three classes using the final diagnosis results: 1664 SARS-CoV-2 virus syndrome images, 876 non-SARS-CoV-2 pneumonia images, and 1369 healthy images. The final diagnosis results were confirmed by nasopharyngeal RT-PCR for SARS-CoV-2.

2.2. Proposed methods

In our study, a transfer-learning-based residual network was developed to extract features from LUS images and help radiologists with SARS-CoV-2 virus syndrome diagnosis. The proposed methods are introduced as follows.

2.2.1. Proposed transfer learning network

Transfer learning reduces training time and improves the network’s generalization ability. In this stage, rather than building a model from scratch, a pre-trained ResNet50 network, trained on ImageNet, is selected as a backbone to extract features from LUS images. Its fully connected layer is modified to match the number of classified categories, and a binary cross-entropy (BCE) function, which computes the BCE between predictions and targets, is used as the loss function [15]. Compared with the mean square error (MSE) loss function, BCE can slow down gradient dispersion and speed up the training process. The architecture of the pre-trained ResNet is modified by adding a fully connected layer for feature extraction and a global average pooling layer to interpret these features in the classification task. The idea is to generate one feature map for each category of the classification task in the last convolutional layer. Global average pooling is more native to the convolution structure because it enforces correspondences between feature maps and categories; thus, the feature maps can be easily interpreted as category confidence maps. Also, there are no parameters to optimize in the global average pooling layer, which helps avoid overfitting.
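As an illustration of the head design described above, the following NumPy sketch (ours, not the paper’s implementation) shows how global average pooling turns one class-specific feature map per category into a class score, and how the BCE loss is computed from the resulting probabilities:

```python
import numpy as np

def global_average_pool(feature_maps):
    """Collapse each (H, W) feature map to a single score by averaging.

    feature_maps: array of shape (num_classes, H, W) -- one map per
    category, as described for the modified ResNet head.
    """
    return feature_maps.mean(axis=(1, 2))

def binary_cross_entropy(predictions, targets, eps=1e-12):
    """Mean BCE between predicted probabilities and 0/1 targets."""
    p = np.clip(predictions, eps, 1 - eps)
    return -np.mean(targets * np.log(p) + (1 - targets) * np.log(1 - p))

# Toy example: 3 class-specific feature maps of size 4x4.
maps = np.zeros((3, 4, 4))
maps[1] += 2.0                          # class 1 strongly activated
scores = global_average_pool(maps)      # -> [0., 2., 0.]
probs = 1.0 / (1.0 + np.exp(-scores))   # sigmoid
loss = binary_cross_entropy(probs, np.array([0.0, 1.0, 0.0]))
```

Because each feature map corresponds to one category, the pooled scores can be read directly as category confidences, which is the interpretability property the text attributes to global average pooling.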

2.2.2. Gradient-weighted class activation mapping

The proposed CAD system aims to provide both the classification results and the salient features on LUS images to help radiologists. The gradient-weighted class activation mapping (Grad-CAM) method creates an activation map that highlights the crucial areas [16]. In the Grad-CAM method, the gradients flowing into the final convolutional layer produce a rough localization map in which the important areas are highlighted. Grad-CAM uses this gradient information to assign an importance value to each neuron that responds to class-specific information in the image.

In the proposed CAD system, the Grad-CAM approach was employed to enhance visualization by focusing attention on the critical lesion regions in US images. The regions where the CNN model was activated were colored from the lowest activation to the highest. The top three activations are marked with three circles: the red circle represents the highest activation and the pink one the second highest.
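The core Grad-CAM computation can be sketched as follows. This is a minimal NumPy illustration of the published method [16], with synthetic feature maps and gradients standing in for a real network’s last convolutional layer:

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Minimal Grad-CAM: weight each feature map by the global average
    of its gradient, sum the weighted maps, and keep only positive
    evidence (ReLU).

    feature_maps, gradients: arrays of shape (K, H, W) taken from the
    last convolutional layer. Returns an (H, W) localization map.
    """
    alphas = gradients.mean(axis=(1, 2))              # one weight per channel
    cam = np.tensordot(alphas, feature_maps, axes=1)  # weighted sum over K
    return np.maximum(cam, 0.0)                       # ReLU

# Toy example: 2 channels on a 3x3 grid; only channel 0 supports the class.
fmaps = np.zeros((2, 3, 3))
fmaps[0, 1, 1] = 4.0                                  # activation at the center
grads = np.stack([np.ones((3, 3)), -np.ones((3, 3))])
cam = grad_cam(fmaps, grads)
# The center cell carries all the positive class evidence.
```

In the CAD system this map is upsampled to the image size and colored, and its highest peaks are the circled regions shown to the radiologists.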

2.3. Platform settings

The modified ResNet50 is trained on a server with a 2 × Six-Core Intel Xeon processor and 128 GB of memory. The server is equipped with an NVIDIA Tesla K40 GPU with 12 GB of memory.

To verify its efficiency in helping radiologists, a web-based graphic user interface (WGUI) was designed to provide a platform for radiologists to perform diagnoses on LUS images. The WGUI randomly selects images from the dataset, displays them to the radiologists, and provides diagnosis options for them to select. It also helps radiologists by providing the diagnosis results of the CAD, displaying color Grad-CAM maps, and marking salient feature reference regions. Fig. 1 shows an example of the diagnosis procedure with the WGUI.

Fig. 1.

Fig. 1

An example of the diagnosis procedure with the WGUI. (a) Diagnosis interface with the diagnosis results from the CAD. (b) Diagnosis interface with Grad-CAM map.

2.4. Image dataset settings

In the LUS dataset, to evaluate the classification performance on ratios of different categories, images have been selected twice to construct a balanced dataset and an unbalanced dataset, respectively.

In the balanced dataset, where each category was represented by the same number of images, 2628 images were selected randomly from the whole dataset. In the unbalanced dataset, all 3909 images from the original dataset were used. Each dataset was split into training, validation, and testing sets (Table 1).

Table 1.

Data-split table.

Group        Balanced dataset        Unbalanced dataset
Categories   S      NS     H         S      NS     H
Images       876    876    876       1664   876    1369
Training     1840                    2736
Validation   263                     391
Testing      525                     782
Total        2628                    3909
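The 70%/10%/20% split above can be sketched as follows; the function and seed are our own illustration, but the resulting set sizes match those reported for the unbalanced dataset in Table 1:

```python
import random

def split_indices(n, train_frac=0.7, val_frac=0.1, seed=0):
    """Shuffle image indices and split them 70/10/20 into
    train/validation/test subsets."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    n_train = round(train_frac * n)
    n_val = round(val_frac * n)
    return (idx[:n_train],
            idx[n_train:n_train + n_val],
            idx[n_train + n_val:])

train, val, test = split_indices(3909)
# Sizes: 2736 / 391 / 782, as in Table 1 for the unbalanced dataset.
```

Because the three slices come from one shuffled index list, the subsets are guaranteed disjoint, matching the statement later in the paper that the training, validation, and testing datasets do not overlap.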

2.5. Evaluation metrics

To verify the performance of the transfer learning model, two commonly used machine learning algorithms were employed for comparison with the proposed model on the same dataset. In machine learning, the support vector machine (SVM) [17] is one of the most robust supervised learning models for classification and regression analysis. An SVM maps training examples to points in space so as to maximize the width of the gap between the two categories. New examples are then mapped into the same space and predicted to belong to a category based on which side of the gap they fall. The K-nearest neighbors (KNN) algorithm is an instance-based classification method in which an unknown object is classified by a plurality vote of its neighbors, the object being assigned to the class most common among its K nearest neighbors. For the KNN parameters, 5 neighbors are used, Euclidean distance is the distance metric, and all features are standardized to the range [0, 1].
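A minimal sketch of the two baselines using scikit-learn, configured with the parameters stated above (5 neighbors, Euclidean distance, features scaled to [0, 1]). The feature vectors here are hypothetical stand-ins; the paper’s actual image features are not available in this form:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

# Hypothetical stand-in data: rows are feature vectors, labels are
# 0 = SARS-CoV-2, 1 = non-SARS-CoV-2 pneumonia, 2 = healthy.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 0.3, size=(30, 5)) for c in (0.0, 2.0, 4.0)])
y = np.repeat([0, 1, 2], 30)

# Standardize all features into [0, 1], as described for the KNN baseline.
X01 = MinMaxScaler().fit_transform(X)

svm = SVC(kernel="rbf").fit(X01, y)
knn = KNeighborsClassifier(n_neighbors=5, metric="euclidean").fit(X01, y)
```

In practice both baselines would be fed the same feature vectors as the deep model (or features extracted from the images), so that the comparison isolates the classifier rather than the representation.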

A confusion matrix is a table used to evaluate classification performance. Each row of the confusion matrix represents the instances of an actual class and each column represents the instances of a predicted class. In the confusion matrix figures, the rows correspond to the predicted class (Output Class) and the columns correspond to the true class (Target Class). The diagonal cells correspond to correctly classified observations, and the off-diagonal cells to incorrectly classified observations. The numbers in the rightmost column are the percentages of all the examples predicted to belong to each class that are correctly and incorrectly classified. The numbers in the bottom row are the percentages of all the examples belonging to each class that are correctly and incorrectly classified. Four metrics, namely accuracy, precision, recall, and F1-score, were calculated and compared.
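The four metrics can be computed directly from a confusion matrix. The sketch below assumes the convention that rows are actual classes and columns are predicted classes, with illustrative counts of our own:

```python
import numpy as np

def per_class_metrics(cm):
    """Accuracy, and per-class precision, recall, and F1 from a confusion
    matrix whose rows are actual classes and columns predicted classes."""
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)                       # correctly classified counts
    precision = tp / cm.sum(axis=0)        # TP / all predicted as class
    recall = tp / cm.sum(axis=1)           # TP / all actually in class
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = tp.sum() / cm.sum()
    return accuracy, precision, recall, f1

# Toy 3-class matrix (S, NS, H): 8 + 9 + 10 correct out of 30.
cm = [[8, 1, 1],
      [0, 9, 1],
      [0, 0, 10]]
acc, prec, rec, f1 = per_class_metrics(cm)
```

The row and column sums used here correspond exactly to the per-class percentages described for the rightmost column and bottom row of the confusion matrix figures.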

In addition, a statistical test, the paired t-test, is used to evaluate the difference between radiologists’ performance with and without CAD help. The p-values of the t-tests show whether there are significant improvements in diagnostic performance.
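A paired t-test of this kind can be run with SciPy. The scores below are illustrative placeholders, not the study’s data:

```python
from scipy import stats

# Hypothetical per-case scores for the same radiologists without and
# with CAD assistance (illustrative numbers only).
without_cad = [0.78, 0.81, 0.75, 0.80, 0.79, 0.77]
with_cad    = [0.90, 0.93, 0.88, 0.94, 0.91, 0.89]

# The paired test compares the two conditions on the same subjects,
# which is why it suits the with/without-CAD design.
t_stat, p_value = stats.ttest_rel(without_cad, with_cad)
# A p-value below 0.05 would indicate a significant improvement with CAD.
```

The pairing matters: each radiologist reads under both conditions, so the test operates on the per-subject differences rather than treating the two groups as independent samples.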

3. Results

We employed two groups of radiologists with different levels of experience to perform diagnoses with and without the help of the proposed CAD system on the platform. The three junior radiologists had an average age of 26 years and less than three years’ experience in LUS diagnosis, and the three senior radiologists had an average age of 35 years and more than eight years’ experience in LUS diagnosis.

3.1. Evaluation results on the balanced dataset

In the balanced dataset, where each category was represented by the same number of images, 2628 images were selected randomly from the whole dataset. Of these, 70% were used for training, 10% for validation, and 20% for testing: 1840 images in the training set, 263 in the validation set, and 525 in the testing set.

Table 2 and Fig. 2 show the confusion matrices for the proposed CAD approach, SVM, and KNN methods. In the confusion matrix figures, the grey level of each cell visualizes its value: the darker the cell, the larger the percentage. The proposed CAD classifies the samples more accurately than the other methods. The values of accuracy, recall, precision, and F1-score are compared in Table 2. These results also indicate the improvement of our proposed model over the SVM and KNN models on the evaluation metrics.

Table 2.

Comparison of confusion matrices between different machine learning methods and radiologists on the balanced dataset.

graphic file with name fx1_lrg.gif

Abbreviations: S: SARS-CoV-2 virus syndrome, NS: Non-SARS-CoV-2 pneumonia, H: Healthy, JR: Junior radiologists, SR: Senior radiologists.

Fig. 2.

Fig. 2

Confusion matrix of different algorithms on the balanced dataset. (a) CAD. (b) SVM. (c) KNN. Abbreviations: S: SARS-CoV-2 virus syndrome, NS: Non-SARS-CoV-2 pneumonia, H: Healthy.

The evaluation results of the radiologists’ performance on the balanced dataset are summarized in Table 2 and Fig. 3. Without the help of our proposed CAD system, only 79.0% and 86.3% of SARS-CoV-2 virus syndrome cases were identified correctly by junior and senior radiologists; these proportions increased to 93.3% and 95.2%, respectively, with the help of the CAD system. After using the proposed CAD system, both junior and senior radiologists achieved higher accuracy, precision, recall, and F1-score values.

Fig. 3.

Fig. 3

Confusion matrix of different radiologists on the balanced dataset. (a) Junior radiologists. (b) Junior radiologists with CAD. (c) Senior radiologists. (d) Senior radiologists with CAD. Abbreviations: S: SARS-CoV-2 virus syndrome, NS: Non-SARS-CoV-2 pneumonia, H: Healthy.

Details of the evaluation results of the different models and radiologists on the balanced dataset are listed in Tables E1 and E2 (supplementary material). The proposed CAD system achieves precision, recall, and F1-score values of 100% each for diagnosing SARS-CoV-2 virus syndrome; 97.14%, 100%, and 98.55% for non-SARS-CoV-2 pneumonia; and 100%, 97.22%, and 98.59% for healthy cases. With the assistance of the proposed CAD system, the precision, recall, and F1-scores for diagnosing SARS-CoV-2 virus syndrome, non-SARS-CoV-2 pneumonia, and healthy cases were higher in both the junior and senior radiologists’ groups. A line chart for the different models and radiologists diagnosing with LUS on the balanced dataset is presented in Fig. 6a. Our CAD system outperforms the other classifiers and improves the performance of both junior and senior radiologists.

Fig. 6.

Fig. 6

Evaluation results for different classifiers and radiologists (a) Balanced dataset. (b) Unbalanced dataset.

3.2. Evaluation with the unbalanced dataset

In the unbalanced dataset, all 3909 images from the original dataset were used. The same ratio (70%, 10%, 20%) was used to split the dataset into training, validation, and testing sets; these splits were applied to each category. From the 3909 images, 2736 were selected for training, 391 for validation, and 782 for testing.

Table 3 and Fig. 4 show the confusion matrices and evaluation metric results for our proposed model and the SVM and KNN methods on the unbalanced dataset. Our model and the other classifiers all achieve high values on the performance metrics.

Table 3.

Comparison of confusion matrices between different machine learning methods and radiologists on the unbalanced dataset.

graphic file with name fx2_lrg.gif

Abbreviations: S: SARS-CoV-2 virus syndrome, NS: Non-SARS-CoV-2 pneumonia, H: Healthy, JR: Junior radiologists, SR: Senior radiologists.

Fig. 4.

Fig. 4

Confusion matrix on the unbalanced dataset. (a) Proposed CAD. (b) SVM. (c) KNN. Abbreviations: S: SARS-CoV-2 virus syndrome, NS: Non-SARS-CoV-2 pneumonia, H: Healthy.

Table 3 and Fig. 5 show the confusion matrices and results of the radiologists’ performance with and without the help of the proposed CAD system on the unbalanced data. Details of the evaluation results of the different models and radiologists on the unbalanced dataset are listed in Tables E3 and E4 (supplementary material). The proposed CAD system achieves precision, recall, and F1-score values of 98.20%, 99.70%, and 98.94% for diagnosing SARS-CoV-2 virus syndrome; 100%, 96.15%, and 98.04% for non-SARS-CoV-2 pneumonia; and 98.54%, 99.26%, and 98.90% for healthy cases. Again, the CAD system improved the radiologists’ performance on the unbalanced dataset. A line chart for the different models and radiologists diagnosing with LUS on the unbalanced dataset is presented in Fig. 6b. Both groups of radiologists achieved higher precision, recall, and F1-score values with the help of the CAD system.

Fig. 5.

Fig. 5

Confusion matrix of different radiologists on the unbalanced dataset. (a) Junior radiologists. (b) Junior radiologists with CAD. (c) Senior radiologists. (d) Senior radiologists with CAD. Abbreviations: S: SARS-CoV-2 virus syndrome, NS: Non-SARS-CoV-2 pneumonia, H: Healthy.

Statistical tests were conducted to evaluate the performance differences of radiologists with and without the help of the CAD system and to determine whether there were significant improvements. Four comparisons were performed, for junior and senior radiologists on the two datasets. The p-values of the t-tests in Table 4 are all less than 0.05, showing that with the help of the CAD system, both junior and senior radiologists significantly improved their diagnostic performance on both the balanced and unbalanced datasets.

Table 4.

t-test between performance of radiologists with and without CAD system.

Dataset               Comparison        p-value
Balanced dataset      JR vs JR + CAD    4.64 × 10⁻²
                      SR vs SR + CAD    1.63 × 10⁻¹²
Unbalanced dataset    JR vs JR + CAD    2.0 × 10⁻²
                      SR vs SR + CAD    2.01 × 10⁻²

Abbreviations: JR: Junior radiologists, SR: Senior radiologists.

4. Discussion

Compared with the explosion of deep learning studies of SARS-CoV-2 virus syndrome based on thoracic CT or chest X-ray, relatively little research on LUS CAD systems can be found in the literature. Some work has been done on exploiting LUS image analysis and deep learning to detect and quantify B-lines and to extract pleural lines and subpleural lesions [18], [19], [20]. Born et al. trained a deep convolutional neural network on a lung ultrasound dataset consisting of 1103 images for SARS-CoV-2 virus syndrome detection, with a sensitivity of 0.96, a specificity of 0.79, and an F1-score of 0.92 [21]. Roy et al. used a deep learning model to detect LUS imaging patterns associated with SARS-CoV-2 virus syndrome, predicting a disease severity score and locating the pathological artifacts of SARS-CoV-2 virus syndrome [22].

Our proposed CAD system, using transfer learning of ResNet, is employed to identify SARS-CoV-2 virus syndrome on LUS images. The results demonstrate that the proposed system achieves high classification precision on both the balanced and unbalanced datasets. The evaluation metrics also justify the better performance of the proposed method compared with the existing classifiers. The experimental results on the ability to assist radiologists demonstrate that the CAD system can significantly improve radiologists’ performance in SARS-CoV-2 virus syndrome diagnosis.

Six radiologists participated in this study and were divided into junior and senior groups according to their experience in LUS diagnosis. Radiologists within each group had similar prior knowledge levels, which prevents bias due to knowledge inequalities. The senior radiologists identified lung lesions better than the junior ones; however, the accuracy of both groups was not high. With the assistance of the proposed CAD system, junior radiologists achieved an accuracy similar to that of the seniors, and both accuracies were higher than 90%. This finding provides evidence that the system can help physicians without much experience in LUS, especially those from the emergency room and ICU, efficiently utilize LUS for SARS-CoV-2 virus syndrome evaluation and follow-up. During the SARS-CoV-2 virus syndrome pandemic, this finding has much practical significance.

In our experiment, it was interesting that the performance of radiologists assisted by the CAD system in diagnosing SARS-CoV-2 virus syndrome was still inferior to that of the CAD system alone, which was contrary to the experiment designers’ assumption. This might be due to the radiologists’ lack of clinical experience with the new disease and their limited confidence in CAD. In some cases, it was difficult for radiologists to distinguish mild lesions of SARS-CoV-2 virus syndrome from healthy lungs; our CAD system can detect the subtle differences between them. With the assistance of our proposed CAD system, physicians are released from complicated training and repetitive image-reading tasks and can spare more time for clinical decision making and treatment implementation.

Buonsenso et al. [23] introduced a specific LUS evaluation procedure for children with suspected SARS-CoV-2 virus syndrome. The clinical examination and lung imaging can be performed concomitantly by just two operators: one pediatrician and one assistant. This minimizes the use of healthcare staff and medical imaging equipment, which reduces the risk of exposure of both clinicians and patients. Although further studies are required, we believe that with the help of our proposed CAD system, clinicians will make better use of LUS for SARS-CoV-2 virus syndrome evaluation, and clinicians who are novices at LUS will obtain high accuracy in identifying SARS-CoV-2 virus syndrome patients among healthy people.

A large quantity of research on AI for SARS-CoV-2 virus syndrome has been published. However, few of those systems have subsequently been applied in the clinic. The computer alone is insufficient for clinical decision-making; humans remain the main subject of medical practice. Explanatory techniques were used in our proposed CAD system to facilitate its use. The output of our CAD system is presented as diagnosis results with probabilities, highlighted critical lesion regions, and Grad-CAM maps. Clinicians can enter the field easily and give their verdicts according to the diagnostic evidence visualized by the CAD system as well as their own experience. In our study, images were selected twice to construct a balanced dataset and an unbalanced dataset to evaluate the classification performance under different category ratios. The training, validation, and testing datasets were disjoint. The radiologists who participated in the tests had different levels of experience in LUS diagnosis, and there were several weeks between the two tests to wash out the radiologists’ memory. The proposed system achieved high classification precision on both the balanced and unbalanced datasets and improved the performance of both senior and junior radiologists on the different datasets. This demonstrates that our proposed CAD system has good reproducibility and generalization.

Another point worth considering is whether the CAD system is too sensitive for SARS-CoV-2 virus syndrome detection. Since most infected patients show mild symptoms and can be monitored at home [24], it may seem unnecessary to detect every minor lesion of SARS-CoV-2 virus syndrome. However, we still recommend the use of the CAD system. First, because of the infectious nature of SARS-CoV-2 virus syndrome, it is very important to identify and isolate infected patients. Moreover, some patients with moderate symptoms may progress to severe disease. A fast and accurate method is essential for disease evaluation.

Our study has some limitations. First, it was technically impossible to blind the radiologists to the method, because they had to know whether they were making a diagnosis with or without the CAD system. The level of trust in the CAD system may influence the diagnosis. Even so, the performance of all the radiologists was significantly improved after using the CAD system. Second, the images obtained from GitHub and the public dataset are inevitably heterogeneous: the ultrasound devices, gain settings, and imaging depths were diverse, and clinical information such as age, gender, and symptoms is unknown. Ultrasonography is operator-dependent, which poses a barrier to objective analysis. Because our CAD system demonstrated its accuracy in evaluating LUS data of SARS-CoV-2 virus syndrome on both balanced and unbalanced datasets, it is reasonable to presume that it can be readily generalized to other LUS datasets and eventually provide clinicians with actionable clues for therapy strategy.

Our proposed CAD system for LUS images improved SARS-CoV-2 virus syndrome diagnosis by using transfer learning of ResNet and helped improve radiologists’ performance. In the future, we plan to apply our proposed method to a large dataset with a greater diversity of classes and a more varied imaging environment. Given the high generalization ability of our method, we expect that it can be easily extended to similar applications in different diseases with few adaptations.

Funding

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

CRediT authorship contribution statement

Shiyao Shang: Writing – original draft. Chunwang Huang: Resources, Visualization. Wenxiao Yan: Project administration. Rumin Chen: Validation. Jinglin Cao: Investigation. Yukun Zhang: Formal analysis. Yanhui Guo: Software, Data curation. Guoqing Du: Conceptualization, Methodology, Supervision, Writing – review & editing.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Footnotes

Appendix A

Supplementary data to this article can be found online at https://doi.org/10.1016/j.ejrad.2021.110066.

Appendix A. Supplementary material

The following are the Supplementary data to this article:

Supplementary Data 1
mmc1.docx (25.1KB, docx)

Articles from European Journal of Radiology are provided here courtesy of Elsevier