Abstract
COVID-19 has killed more than 5 million people worldwide within a short time. It is caused by SARS-CoV-2, which continuously mutates and produces new, more transmissible strains. Early diagnosis of COVID-19 is therefore of great significance for curbing its spread and reducing the death rate. Traditional diagnostic methods such as reverse-transcription polymerase chain reaction (RT-PCR) are time-consuming and report many false negatives, making them inadequate on their own during a pandemic. Medical imaging, analyzed with machine learning and deep learning, is among the most effective techniques for detecting respiratory disorders. However, conventional machine learning methods depend on extracted and engineered features, and the choice of features strongly influences classifier performance. In this study, the Histogram of Oriented Gradients (HOG) and eight deep learning models were utilized for feature extraction, while K-Nearest Neighbours (KNN) and Support Vector Machines (SVM) were used for classification. A combined HOG and deep learning feature set was proposed to improve the performance of the classifiers. VGG-16 + HOG achieved 99.4% overall accuracy with SVM, indicating that our proposed concatenated feature can enhance the SVM classifier's performance in COVID-19 detection.
Keywords: COVID-19 detection, Machine learning, Deep learning, Feature extraction, HOG
1. Introduction
More than five million people have been killed by COVID-19, a new viral infection caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) [1]. SARS-CoV-2 was initially detected in Wuhan, China, in December 2019, and its rapid global spread led the World Health Organization to proclaim a pandemic on March 11, 2020. SARS-CoV-2 can continue to evolve after settling into its host cell, giving rise to more contagious and dangerous variants including omicron and delta (B.1.617.2) [2]. Patients with COVID-19 infection exhibit clinical symptoms comparable to those of some viral upper respiratory infections, including respiratory syncytial virus (RSV), influenza, and bacterial pneumonia [3]. One of these symptoms is bilateral lung infiltrates detectable on medical imaging, which can lead to chronic lung damage and death if not diagnosed and treated immediately.
Medical imaging such as Computed Tomography (CT) and X-ray is another crucial technique in COVID-19 diagnosis [4] and can serve as an alternative to Polymerase Chain Reaction (PCR) methods, which are time-consuming and report many false negative results [5,6]. PCR is a laboratory method used to rapidly produce (amplify) millions to billions of copies of a specific fragment of DNA, detecting the presence of the actual virus's genetic material or its fragments. Reverse-transcription PCR, real-time PCR, and reverse transcription loop-mediated isothermal amplification are the current coronavirus diagnostic methods [7]. Furthermore, chest CT imaging, being a cross-sectional 3D imaging technique, has been described as possessing enhanced density resolution compared to the chest X-ray, which is a 2D imaging technique; chest CT imaging can therefore clearly show early, minor inflammatory alterations in the lung [4]. However, medical professionals find it difficult to differentiate between COVID-19 pneumonia and other viral and bacterial pneumonia. To make it easier to detect COVID-19-positive instances, computer-aided diagnostic methods are being proposed.
Computer-aided diagnostic (CAD) technologies can make use of artificial intelligence (AI) to classify the various complicated structures present in medical images [8,9]. As a result, AI, specifically Deep Learning (DL) and Machine Learning (ML), has long been studied for radiological diagnosis and recommended for more precise and faster COVID-19 detection. AI has also shown promising results in the detection of other diseases such as breast cancer [10], tuberculosis [11], and diabetic foot ulcer [12]. The three classes of CT images typically employed in COVID-19 classification are coronavirus, pneumonia, and normal; these images are shown in Fig. 1a, b, and c respectively. The publicly available CT image datasets include [13], containing 349 COVID-19-positive images and 397 healthy-individual images; [14], containing 371 COVID-19-positive images and 328 common pneumonia images; [15], containing 1252 COVID-19-positive images and 1229 healthy-individual images; and [16], containing 349 COVID-19-positive images, among others.
Fig. 1.
Chest CT scan images. (a) COVID-19; (b) common pneumonia (c) healthy individuals.
For a more rapid COVID-19 classification, different feature extraction methods have been used to improve classical machine learning algorithms. These techniques include the Histogram of Oriented Gradients (HOG), Local Binary Pattern (LBP), Scale-Invariant Feature Transform (SIFT), and the Grey Level Co-occurrence Matrix (GLCM), among others. HOG is a feature descriptor used to extract features from images, and a combination of HOG and ML algorithms can be used for complex detection tasks. LBP is an efficient texture descriptor that thresholds the neighboring pixels based on the current pixel value [17]. SIFT is used to detect and describe an image's local features [18]. GLCM is used for texture analysis and has numerous applications, particularly in medical image analysis [19].
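To make the descriptor concrete, the sketch below computes a HOG feature vector for a single image with scikit-image; the library choice and all parameter values (orientations, cell and block sizes) are our own illustrative assumptions, not settings taken from any of the cited studies.

```python
# Illustrative HOG extraction with scikit-image (assumed toolchain).
import numpy as np
from skimage.feature import hog

image = np.random.rand(224, 224)  # stand-in for a preprocessed grayscale CT slice

features = hog(
    image,
    orientations=9,           # number of gradient-orientation histogram bins
    pixels_per_cell=(8, 8),   # cell size over which histograms are accumulated
    cells_per_block=(2, 2),   # block size used for local contrast normalization
    block_norm="L2-Hys",
)
print(features.shape)         # one flat descriptor per image
```

The result is a single flat vector per image, which is exactly the form a classical classifier such as SVM or KNN expects.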
1.1. Related works
Support Vector Machine (SVM), Random Forest (RF), K-Nearest Neighbors (KNN), and other classical Machine Learning (ML) algorithms have been used in COVID-19 detection. However, these algorithms depend on extracted and engineered features, whereby the optimum features influence the classifier's performance [20]. Consequently, researchers have used various feature extraction techniques to improve the traditional machine learning algorithms in COVID-19 classification. SVM-based COVID-19 classification has reached a highest accuracy of 99% through the combination of feature extraction techniques such as LBP, GLCM, and HOG [21].
Ref. [22] introduced an automatic technique for COVID-19 detection on CT images, using extended segmentation-based fractal texture analysis along with the discrete wavelet transform for pertinent feature extraction. The optimal features were chosen and combined with an entropy-controlled genetic algorithm and a serial approach respectively. The chosen features were passed through different classifiers for detection, and the naive Bayes classifier attained the best overall accuracy of 92.6%. In addition, HOG, SIFT, GLCM, and LBP were employed by Ref. [23] to extract features for COVID-19 detection on CT scan and X-ray images. The most efficient methods were HOG and LBP, whose features were fed to Bag of Tree, Kernel Extreme Learning Machine, KNN, and SVM classifiers. LBP + SVM attained an overall accuracy of 99.02%.
Deep learning has been demonstrated to be an effective feature extractor in various computer vision approaches and can be employed to improve classification accuracy. The VGG16, VGG19, ResNet50, and AlexNet pre-trained deep learning models were employed by Ref. [24] for COVID-19 detection on CT scan images after denoising the images with anisotropic diffusion techniques. In their experiment, VGG19 outperformed the other pre-trained models with an accuracy of 98.06%. Furthermore, the beneficial application of CNN models in COVID-19 classification was demonstrated in the study of [25]. The VGG network, specifically VGG16 combined with INCA feature selection, showed the highest performance of 99.14% accuracy among thirteen deep CNN models for the classification of COVID-19, viral pneumonia, and normal individuals on lung X-ray images. Ref. [26] proposed a dual-stage deep CNN, SKICU-Net and P-DenseCOVNet, for automatic segmentation and efficient feature extraction from chest X-ray scans for the diagnosis of COVID-19 and pneumonia; their proposed model achieved 93.8% accuracy. Furthermore, pre-trained CNNs have been combined with a pyramid MLP-mixer module for the classification of COVID-19, other pneumonia, and normal patients on chest X-ray scans, attaining an accuracy of 98.3% [27].
Deep learning has also been integrated with traditional machine learning methods to enhance COVID-19 detection. In the research of [28], features extracted by EfficientNet pre-trained models were first fed to RF and SVM, and a logistic regression classifier was later employed for COVID-19 classification. Compared to the related pre-trained models, a better accuracy of 99% was achieved by this method. Hybrid-Patch-Alex, another deep feature engineering network, was developed by Ref. [29] for the diagnosis of COVID-19, heart failure (HF), chronic obstructive pulmonary disease (COPD), and a normal class on chest CT scan and chest X-ray datasets. The Hybrid-Patch-Alex model is composed of a hybrid patch division technique, iterative neighborhood component analysis, the AlexNet pre-trained model, and artificial neural network, SVM, and KNN classifiers. This model attained 99.82% accuracy with the SVM and KNN classifiers on the individual datasets.
In recent times, deep learning techniques joined with handcrafted feature extraction have been used to enhance conventional machine learning methods for COVID-19 classification. Ref. [30] combined handcrafted and deep convolutional features to classify COVID-19, other pneumonia, and normal patients on chest X-ray images; this feature combination achieved 98.8% accuracy with an SVM classifier. In the research of [31], COVID-19, common pneumonia, and normal cases were classified on computed tomography scan images by combining the extracted features of seven DL models with LBP to enhance KNN and SVM classifier performance. The combined features outperformed the related existing work, with an accuracy of 99.4% attained by VGG-19 + LBP. In the recent study of [32], a Swin-textural model was developed to assess the performance of the Swin architecture in feature engineering. Textural features were extracted from the datasets with local phase quantization operations and LBP. The KNN classifier was used to analyze the best feature vectors selected by iterative neighborhood component analysis, while a greedy algorithm was used to establish the best result from the classification of COVID-19, normal lungs, and other non-COVID lung diseases on chest CT images. An accuracy of 98.71% was achieved by Swin-textural, which outperformed the state-of-the-art deep learning models.
1.2. Contributions
In this study, a new concatenated handcrafted and automatic feature extraction technique for COVID-19 classification was explored. We combined Histogram of Oriented Gradients (HOG) features with the features extracted by eight CNN pre-trained models (GoogleNet, ShuffleNet, MobileNetv2, ResNet18, ResNet50, ResNet101, VGG-16, and VGG-19) to train SVM and KNN classifiers for the classification of COVID-19 pneumonia, common pneumonia, and normal cases on computed tomography images. This study's contributions include:
• A hybrid HOG and CNN feature for the training of SVM and KNN classifiers was proposed. This enhances classification performance by enabling the classifiers to learn from the combined data.
• Integration of handcrafted and deep features: the study blends the automated deep features of eight pre-trained Convolutional Neural Network (CNN) models with handcrafted features, namely the Histogram of Oriented Gradients (HOG). This combination aims to exploit the complementary information captured by the two feature extraction approaches.
• Several pre-trained CNN models: eight pre-trained CNN models (ShuffleNet, GoogleNet, MobileNetv2, ResNet18, ResNet50, ResNet101, VGG-16, and VGG-19) are used in this work. This variety allows a thorough examination of architectures that capture different levels of hierarchical features, and using numerous pre-trained models increases the study's robustness.
2. Materials and methods
2.1. Data retrieval and processing
Three CT scan image datasets containing three classes (328 common pneumonia, 1972 COVID-19, and 1608 healthy individuals) were obtained. These image samples are shown in Fig. 1 above. The first dataset contains 349 COVID-19-positive images and 397 healthy-individual images [13], the second contains 371 COVID-19-positive and 328 common pneumonia images [14], and the third contains 1252 COVID-19-positive and 1229 healthy-individual images [15]. The images were preprocessed by resizing them to 224 × 224 and then augmented offline with RandRescale, RandXReflection, RandRotation, RandXTranslation, and RandYTranslation to generate 10,000 images per class (original images included), increasing the sample size of the training data and reducing overfitting [33]. The final size of the training set from each class was 8,000, making the total training size 24,000 images.
2.2. Models
2.2.1. Support Vector Machine (SVM)
Support Vector Machine (SVM) is a supervised learning algorithm. SVM requires the positive and negative labels to take the values +1 and −1 respectively. Given a learning algorithm and a dataset, the learning algorithm can be applied to the dataset to build a model. SVM generates a hyperplane that separates the two input classes with the highest margin, where the margin is the distance between the support vectors and the hyperplane [34].
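The maximum-margin behaviour can be seen on a toy example; the sketch below uses scikit-learn's linear SVM on made-up 2-D points (not the paper's features), with labels encoded as −1/+1 as described above.

```python
# Minimal linear SVM sketch (scikit-learn; toy, linearly separable data).
from sklearn.svm import SVC

X = [[0, 0], [0, 1], [2, 2], [2, 3]]   # toy 2-D feature vectors
y = [-1, -1, 1, 1]                     # negative / positive labels
clf = SVC(kernel="linear").fit(X, y)   # fits the maximum-margin hyperplane

print(clf.support_vectors_)            # the points that define the margin
print(clf.predict([[0, 0.5], [2, 2.5]]))
```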
2.2.2. K-nearest neighbors (KNN)
K-Nearest Neighbors (KNN) is an instance-based, non-parametric learning algorithm that uses the whole dataset as the model. Unlike other learning algorithms, KNN does not discard the training data after building the model but retains all training examples in memory. When a new, previously unseen example x is encountered, KNN finds the k training examples closest to x and returns their average label in regression or their majority label in classification [34].
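KNN's instance-based nature is easy to demonstrate: the fitted "model" is simply the stored training set, and prediction is a majority vote among the k nearest stored examples. The sketch below uses scikit-learn and toy 1-D data of our own invention.

```python
# Minimal KNN sketch (scikit-learn; toy data).
from sklearn.neighbors import KNeighborsClassifier

X = [[0], [1], [2], [10], [11], [12]]                # stored training examples
y = [0, 0, 0, 1, 1, 1]
knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)  # simply memorizes X, y

# Each query returns the majority label of its 3 nearest training points.
print(knn.predict([[1.5], [10.5]]))
```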
2.3. Feature extraction
2.3.1. Histogram of Oriented Gradients (HOG)
Histogram of Oriented Gradients (HOG) is a feature descriptor often applied to extract features from image data. HOG is a handcrafted feature extraction technique that, combined with an ML algorithm, can be employed for very difficult detection tasks [35,36]. HOG uses dense, uniformly spaced grids for its calculation and local contrast normalization for accuracy improvement, which distinguishes it from other feature descriptors such as shape contexts, edge orientation histograms, and SIFT. HOG is calculated in a sliding-window fashion inside a region and has a semi-global representation benefit. The image is divided into smaller cells, the gradient at each pixel is computed, the orientation of each pixel's gradient is determined mathematically, and a histogram is generated by accumulating the gradients in each cell [37]. The x- and y-direction gradients are beneficial features from which complex shapes like corners and edges can be represented, as shown in equation (1):
g = √(gx² + gy²),  θ = arctan(gy / gx)    (1)

where gx is the image derivative with respect to x, gy is the image derivative with respect to y, g is the gradient magnitude, and θ is the gradient orientation.
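Equation (1) can be reproduced directly in NumPy; the toy image below is our own, and np.gradient stands in for whatever derivative filter a given implementation actually uses.

```python
# Per-pixel gradient magnitude and orientation, as in equation (1).
import numpy as np

image = np.arange(25, dtype=float).reshape(5, 5)  # toy 5x5 image
gy, gx = np.gradient(image)        # derivatives along rows (y) and columns (x)

g = np.sqrt(gx ** 2 + gy ** 2)     # gradient magnitude
theta = np.arctan2(gy, gx)         # gradient orientation
print(g[2, 2], np.degrees(theta[2, 2]))
```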
2.3.2. Convolutional neural network (CNN)
The convolutional neural network (CNN) is among the most widely used deep learning (DL) techniques and performs better than several other methods in image recognition [38], which has made DL popular in recent years. A CNN has several hidden layers and mimics the brain's visual cortex in recognizing and processing an image, which distinguishes it from other neural networks [39]. Images are transformed by the convolution layer through convolution operations acting like a pool of digital filters, and the pooling layer then shrinks the image dimensions by merging neighboring pixels into a single pixel [34]. The various CNN architectures share very similar basic components and are normally made up of three kinds of layers: convolution, pooling, and fully connected layers. The CNN pre-trained models used in this study are GoogleNet, ShuffleNet, MobileNetv2, ResNet18, ResNet50, ResNet101, VGG-16, and VGG-19, with 22, 50, 28, 18, 50, 101, 16, and 19 layers respectively.
Pre-trained convolutional neural networks can be fine-tuned on medical image datasets to enable the large networks to learn a particular part of the desired task [40]. Transfer learning with pre-trained models has been proven by various studies to be efficient and effective for the classification of medical images [41,42]. Efficient performance has also been observed with pre-trained models when only a small dataset was available, unlike models constructed from scratch, which need large amounts of data for optimal performance [43].
2.4. COVID-19 detection
Handcrafted HOG features as well as automatic deep features from eight CNN pre-trained models (ShuffleNet, GoogleNet, MobileNetv2, ResNet18, ResNet50, ResNet101, VGG-16, and VGG-19) were extracted in this study. 80% of the data was used for training and 20% for testing. Two classifiers, KNN and SVM, were trained in three phases. In the initial phase, handcrafted features were extracted with HOG and classified with SVM and KNN. In the next phase, the eight pre-trained models were employed to extract the automatic deep features, and the two classifiers were trained with each network's last-pooling-layer features. In the last phase, the HOG features and those from each of the eight pre-trained models were combined and used to train the two classifiers. The above method for the classification of COVID-19 is shown in Fig. 2.
Fig. 2.
COVID-19 classification process.
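The third phase, concatenating the two feature types before training, can be sketched as follows; the feature dimensions, random stand-in data, and scikit-learn utilities are all illustrative assumptions, not the study's actual pipeline.

```python
# Sketch of phase 3: concatenate HOG and CNN features, then train a classifier.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 60
hog_feats = rng.normal(size=(n, 100))    # stand-in handcrafted HOG features
deep_feats = rng.normal(size=(n, 512))   # stand-in CNN pooling-layer features
labels = rng.integers(0, 3, size=n)      # COVID-19 / pneumonia / normal

combined = np.concatenate([hog_feats, deep_feats], axis=1)  # one vector per image
X_tr, X_te, y_tr, y_te = train_test_split(
    combined, labels, test_size=0.2, random_state=0)        # 80/20 split, as above

clf = SVC(kernel="linear").fit(X_tr, y_tr)
print(combined.shape)
```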
Steps of the work can be summarized as follows:
1. Data Preparation: The study uses CT scan images to classify individuals into three groups: healthy, COVID-19, and common pneumonia. After resizing and augmentation, 80% of the dataset is used for training and the remaining 20% for testing.
2. Feature Extraction (Phase 1): HOG features are extracted from the CT scan images.
3. Classifier Training (Phase 1): Two classifiers, KNN and SVM, are trained on the handcrafted HOG features to discriminate between healthy patients, COVID-19, and common pneumonia.
4. Automatic Feature Extraction (Phase 2): Eight pre-trained Convolutional Neural Network (CNN) models (ShuffleNet, GoogleNet, MobileNetv2, ResNet18, ResNet50, ResNet101, VGG-16, and VGG-19) are used to extract deep features.
5. Classifier Training (Phase 2): The automatic deep features are retrieved from each pre-trained CNN model's final pooling layer and used to train the KNN and SVM classifiers to categorize the CT scan images into the right classes.
6. Feature Combination (Phase 3): The HOG features are concatenated with the features from each of the eight pre-trained CNN models, producing combined feature sets that improve the data representation by joining handcrafted and automatically learned deep features.
7. Classifier Training (Phase 3): The KNN and SVM classifiers are retrained with the concatenated features from HOG and the pre-trained CNN models.
8. Evaluation: The trained classifiers from each phase are assessed on the testing dataset. To evaluate how well the models classify CT scan images, performance measures including accuracy, sensitivity, specificity, F1 score, precision, Youden index, and Area Under the Curve (AUC) are calculated.
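The evaluation metrics listed in step 8 all derive from confusion-matrix counts; the sketch below computes them for one illustrative binary case with made-up counts, not the paper's results.

```python
# Metric definitions from confusion-matrix counts (illustrative numbers).
tp, fn, fp, tn = 95, 5, 3, 97           # hypothetical counts, not the paper's

sensitivity = tp / (tp + fn)            # recall / true positive rate
specificity = tn / (tn + fp)            # true negative rate
precision = tp / (tp + fp)
accuracy = (tp + tn) / (tp + fn + fp + tn)
f1 = 2 * precision * sensitivity / (precision + sensitivity)
youden = sensitivity + specificity - 1  # Youden index

print(round(accuracy, 3), round(youden, 3))
```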
3. Results
In this study, handcrafted HOG features and automatic deep features from eight pre-trained CNN models were extracted and used to train SVM and KNN classifiers for the classification of common pneumonia, COVID-19, and healthy-individual CT scan images. The results obtained are presented in the tables and figures below. In Table 1, the SVM and KNN classifier performances are compared to determine which model performs best with HOG features. The KNN attained an accuracy, sensitivity, specificity, precision, F1 score, AUC, and Youden index of 97.8%, 100%, 95.5%, 98.1%, 99%, 97.7%, and 95.5% respectively, outperforming the SVM as shown in Fig. 3.
Table 1.
KNN and SVM performance on features extracted with HOG.
| Models | Accuracy (%) | Sensitivity (%) | Specificity (%) | F1 score (%) | Precision (%) | Youden index (%) | AUC (%) |
|---|---|---|---|---|---|---|---|
| HOGKNN | 97.8 | 100 | 95.5 | 99 | 98.1 | 95.5 | 97.7 |
| HOGSVM | 95 | 95.7 | 94.2 | 95.9 | 96.1 | 89.9 | 94.9 |
Fig. 3.
Comparison of KNN and SVM performance on features extracted with HOG.
The remarkable accuracy of 97.8% attained by HOGKNN indicates that the KNN classifier did an excellent job of correctly identifying the different CT scan images. HOGSVM also showed a high accuracy of 95%, demonstrating its effectiveness in class differentiation. Sensitivity, sometimes referred to as recall or the true positive rate, measures the percentage of actual positive instances that are accurately detected. HOGKNN demonstrated complete sensitivity (100%), meaning that all cases of COVID-19, common pneumonia, and healthy patients were effectively recognised. With a somewhat lower sensitivity of 95.7%, HOGSVM produced slightly more false negatives.
Specificity measures the percentage of actual negative cases that are accurately detected. With HOGSVM scoring 94.2% and HOGKNN scoring 95.5%, both models demonstrated strong specificity. The F1 score is the harmonic mean of sensitivity and precision. With an F1 score of 99%, HOGKNN demonstrated a good balance between sensitivity and precision; HOGSVM also had a high F1 score of 95.9%, highlighting its capacity to strike a favourable balance between recall and precision. Precision reflects the accuracy of the models' positive predictions; HOGKNN and HOGSVM both showed excellent precision at 98.1% and 96.1% respectively. KNN outperformed SVM in the comparison utilizing HOG features. Since HOG features are created manually and effectively capture local texture patterns, they suit KNN's instance-based method, whereas SVM is a classifier that seeks the best hyperplane to divide the classes.
Table 2 presents the performance metrics of a KNN classifier applied to features extracted with various Convolutional Neural Network (CNN) architectures to classify CT scan images into the common pneumonia, COVID-19, and healthy categories.
Table 2.
Performance of the KNN classifier on features extracted with CNN.
| Models | Accuracy (%) | Sensitivity (%) | Specificity (%) | F1 score (%) | Precision (%) | Youden index (%) | AUC (%) |
|---|---|---|---|---|---|---|---|
| GoogleNet | 97.4 | 98.1 | 96.6 | 98 | 97.9 | 94.7 | 97.3 |
| MobileNetv2 | 96.3 | 96.9 | 95.7 | 97 | 97.2 | 92.6 | 96.2 |
| ResNet-18 | 96.3 | 97.7 | 94.9 | 97.4 | 97.1 | 92.6 | 96.2 |
| ResNet-50 | 97.8 | 98.4 | 97.2 | 98.3 | 98.3 | 95.6 | 97.7 |
| ResNet-101 | 96 | 97.5 | 94.4 | 97.1 | 96.8 | 91.9 | 95.9 |
| ShuffleNet | 96 | 98.4 | 93.6 | 97.6 | 96.8 | 92 | 95.9 |
| VGG-16 | 98.4 | 99 | 97.9 | 98.9 | 98.8 | 96.9 | 98.3 |
| VGG-19 | 98.3 | 98.4 | 98.3 | 98.5 | 98.7 | 96.7 | 98.2 |
The accuracy results show consistently strong performance, ranging from 96% to 98.4% across the CNN models. The two models with the highest accuracy, VGG-16 and VGG-19, achieved 98.4% and 98.3% respectively. With the extracted features of VGG-16, KNN attained an accuracy, sensitivity, specificity, precision, F1 score, AUC, and Youden index of 98.4%, 99%, 97.9%, 98.8%, 98.9%, 98.3%, and 96.9% respectively. The sensitivity of all CNN models was strong, ranging from 96.9% to 99%, and specificity values ranged from 93.6% to 98.3%, with the highest sensitivity and specificity attained by VGG-16 and VGG-19. All models have consistently high F1 scores, ranging from 97% to 98.9%, with VGG-16 and VGG-19 again scoring highest. The Youden index ranges from 91.9% to 96.9% across all models, and AUC values span 95.9% to 98.3%, signifying the models' overall performance; likewise, the highest AUC values were attained by VGG-16 and VGG-19.
Table 2 thus demonstrates the effectiveness of the KNN classifier in classifying CT scan images of COVID-19, common pneumonia, and healthy patients based on features retrieved by various CNN architectures. Based on the reported performance metrics, VGG-16 and VGG-19 appear to be especially viable architectures for this purpose. Similarly, the highest performance from both the KNN and SVM classifiers was attained with VGG-16 extracted features with respect to accuracy, sensitivity, precision, F1 score, AUC, and Youden index, as shown in Fig. 4, Fig. 5.
Fig. 4.
CNN extracted features with KNN classifier performance.
Fig. 5.
CNN extracted features with SVM classifier performance.
Table 3 displays the performance metrics of a Support Vector Machine (SVM) classifier used to classify CT scan images into common pneumonia, COVID-19, and healthy categories based on features extracted using different CNN architectures. With the extracted features of VGG-16, SVM attained an accuracy, sensitivity, specificity, precision, F1 score, AUC, and Youden index of 98.2%, 99.4%, 97%, 98.5%, 98.9%, 98.1%, and 96.4% respectively. The range of accuracy scores, from 89.9% to 98.2%, shows that the various CNN models perform differently: ShuffleNet had the lowest accuracy at 89.9%, while VGG-16 had the best at 98.2%. Sensitivity estimates range from 89.3% to 99.4%, with the greatest sensitivity levels attained by VGG-16 and VGG-19, and specificity values range from 88% to 97%. F1 scores range from 91.2% to 98.9%, the Youden index from 79.8% to 96.4%, and AUC scores, which represent overall model performance, from 90.7% to 98.1%. The robustness and promise of VGG-16 for SVM-based CT image classification is demonstrated by its consistently good performance across a range of criteria. According to the results, the SVM classifier does a good job of identifying CT scan images of common pneumonia, COVID-19, and healthy people when applied to features extracted using different CNN architectures.
Table 3.
Performance of the SVM classifier on features extracted with CNN.
| Models | Accuracy (%) | Sensitivity (%) | Specificity (%) | F1 score (%) | Precision (%) | Youden index (%) | AUC (%) |
|---|---|---|---|---|---|---|---|
| GoogleNet | 96.5 | 97.1 | 95.9 | 97.4 | 97.3 | 93 | 96.4 |
| MobileNetv2 | 97.4 | 97.5 | 97.2 | 97.7 | 98 | 94.7 | 97.3 |
| ResNet-18 | 90.8 | 89.3 | 92.3 | 91.2 | 93.2 | 81.6 | 90.7 |
| ResNet-50 | 95.8 | 95.7 | 95.9 | 96.3 | 96.9 | 91.6 | 95.7 |
| ResNet-101 | 95.9 | 97.1 | 94.7 | 96.9 | 96.8 | 91.8 | 95.8 |
| ShuffleNet | 89.9 | 91.8 | 88 | 92 | 92.2 | 79.8 | 89.7 |
| VGG-16 | 98.2 | 99.4 | 97 | 98.9 | 98.5 | 96.4 | 98.1 |
| VGG-19 | 97.5 | 97.5 | 97.4 | 97.8 | 98.1 | 94.9 | 97.4 |
To explore the influence of the combined features on the classifiers in COVID-19 classification, the handcrafted HOG features and the automatic deep learning features were concatenated and used to train the classifiers. The performance metrics of a KNN classifier trained on the combination of automatically generated deep learning features and handcrafted HOG features from the several CNN architectures are displayed in Table 4 and Fig. 6. The KNN attained its highest accuracy of 86.7% with ResNet-18 + HOG, its highest sensitivity of 100% with MobileNetv2 + HOG and VGG-19 + HOG, and its highest specificity of 80.1% with ResNet-50 + HOG. The findings imply that the KNN classifier's performance is affected by the combination of handcrafted HOG features and automatically generated deep learning features, but the effect differs depending on the CNN model. ResNet-18 + HOG outperformed the other models in this context, as evidenced by its highest accuracy, F1 score, precision, Youden index, and AUC. MobileNetv2 + HOG and ResNet-50 + HOG demonstrated strong performance as well, especially in terms of sensitivity, indicating that they can successfully detect positive COVID-19 instances. The complexity added by combining handcrafted and deep learning features may be the cause of the decreased specificity scores in certain scenarios, which point to difficulties in reliably differentiating healthy cases. When integrating several feature types for classification tasks, the findings emphasise the importance of proper feature selection and model architecture.
Table 4.
Performance of the KNN classifier on features extracted with CNN + HOG.
| Models | Accuracy (%) | Sensitivity (%) | Specificity (%) | F1 score (%) | Precision (%) | Youden index (%) | AUC (%) |
|---|---|---|---|---|---|---|---|
| GoogleNet + HOG | 70.3 | 98.8 | 40.3 | 85 | 74.5 | 39.1 | 70.2 |
| MobileNetv2 + HOG | 71.5 | 100 | 41.6 | 86 | 75.4 | 41.6 | 70.5 |
| ResNet-18 + HOG | 86.7 | 94.8 | 78.2 | 91.9 | 89.1 | 73 | 85.7 |
| ResNet-50 + HOG | 85.7 | 91.1 | 80.1 | 89.8 | 88.6 | 71.2 | 84.7 |
| ResNet-101 + HOG | 69.5 | 99.4 | 38.5 | 84.7 | 73.8 | 37.9 | 68.5 |
| ShuffleNet + HOG | 70.3 | 98.8 | 40.3 | 84.9 | 74.5 | 39.1 | 69.3 |
| VGG-16 + HOG | 70.3 | 96.3 | 42.9 | 84.1 | 74.7 | 39.2 | 69.3 |
| VGG-19 + HOG | 71.5 | 100 | 41.6 | 86 | 75.4 | 41.6 | 70.5 |
Fig. 6.
CNN + HOG extracted features with KNN classifier performance.
The performance metrics of the SVM classifier trained on features combining handcrafted HOG features with automatically generated deep learning features from the several CNN architectures are displayed in Table 5. The accuracy levels, which range from 92.6% to 99.4%, are consistently high for every model: ResNet-18 + HOG has the lowest accuracy and VGG-16 + HOG the highest. These high accuracies show that the SVM classifier can effectively classify the data when handcrafted and deep learning features are combined. Sensitivity ranges from 90.1% to 100% and specificity from 92.6% to 98.7%, with the maximum sensitivity and specificity attained by VGG-16 + HOG. F1 scores, ranging from 92.5% to 99.7%, and precision values, ranging from 94.4% to 99.5%, are likewise consistently high, with the combination of VGG-16 and HOG producing the best F1 and precision scores. AUC values, which represent overall model performance, vary from 91.6% to 99.3%, with VGG-16 + HOG again attaining the greatest value. The SVM classifier with the concatenated features has thus improved the classification, as shown in Fig. 7. As Table 4 and Table 5 show, KNN and SVM performance differs depending on the CNN model used, and the concatenated features benefit SVM far more than KNN. VGG-16 captures rich information that complements the handcrafted HOG features, making the data better suited to SVM separation. The results show that combining automated deep learning features with handcrafted HOG features greatly improves the SVM classifier, particularly when using features extracted from VGG-16. The combined feature set's outstanding performance across several criteria indicates its ability to provide precise and dependable COVID-19 classification in CT scan images.
Table 5.
Performance of the SVM classifier on features extracted with CNN + HOG.
| Models | Accuracy (%) | Sensitivity (%) | Specificity (%) | F1 score (%) | Precision (%) | Youden index (%) | AUC (%) |
|---|---|---|---|---|---|---|---|
| GoogleNet + HOG | 97.5 | 97.5 | 97.4 | 97.8 | 98.1 | 94.9 | 96.5 |
| MobileNetv2+HOG | 97.5 | 98.8 | 96.1 | 98.4 | 98 | 94.9 | 96.5 |
| ResNet-18 + HOG | 92.6 | 92.6 | 92.6 | 93.5 | 94.4 | 85.2 | 91.6 |
| ResNet-50 + HOG | 96.2 | 96 | 96.5 | 96.6 | 97.2 | 92.5 | 95.2 |
| ResNet-101 + HOG | 94.3 | 92.6 | 96.1 | 94.2 | 95.9 | 88.7 | 93.3 |
| ShuffleNet + HOG | 93 | 90.1 | 96.1 | 92.5 | 95.1 | 86.2 | 92 |
| VGG-16 + HOG | 99.4 | 100 | 98.7 | 99.7 | 99.5 | 98.7 | 99.3 |
| VGG-19 + HOG | 97.9 | 98.2 | 97.6 | 98.3 | 98.4 | 95.8 | 97.8 |
Fig. 7.
CNN + HOG extracted features with SVM classifier performance.
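As an illustrative sketch of the concatenation step (not the authors' exact pipeline: the feature dimensions, kernel choice, and data below are synthetic placeholders), the handcrafted and deep feature vectors can simply be stacked along the feature axis before training the SVM:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Placeholder feature matrices: in the paper these would be HOG
# descriptors and deep activations (e.g. a VGG-16 fully connected
# layer) computed for each CT image.
n_samples = 200
hog_features = rng.normal(size=(n_samples, 81))     # handcrafted
deep_features = rng.normal(size=(n_samples, 4096))  # deep (CNN)
labels = rng.integers(0, 2, size=n_samples)         # synthetic labels

# Concatenate along the feature axis to form the combined descriptor.
combined = np.hstack([hog_features, deep_features])

X_train, X_test, y_train, y_test = train_test_split(
    combined, labels, test_size=0.25, random_state=0)

clf = SVC(kernel="rbf").fit(X_train, y_train)
acc = accuracy_score(y_test, clf.predict(X_test))
```

With real HOG and VGG-16 features in place of the random placeholders, `combined` is the concatenated feature vector the classifier is trained on.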
The comparison in Table 6 between the proposed concatenated feature model (VGG16 + HOG SVM) and other state-of-the-art models demonstrates the success of the proposed technique for classifying CT scan images of COVID-19, common pneumonia, and healthy patients.
Table 6.
Comparison of the best-performed proposed concatenated feature model result with the state-of-the-art model.
| Models | Accuracy (%) | Sensitivity (%) | Specificity (%) | F1 score (%) | Precision (%) | Youden index (%) | AUC (%) | Images |
|---|---|---|---|---|---|---|---|---|
| VGG19+ LBPKNN [31] | 99.4 | 99.3 | 99.3 | 99.0 | 98.8 | 98.6 | 99.3 | CT Scan |
| CNN-Features Fusion Method [30] | 98.8 | – | – | 98.9 | – | – | – | Chest X-ray |
| CNN-Pyramid MLP fusion [27] | 98.3 | 98.2 | 99.1 | 98.2 | 98.3 | – | – | Chest X-ray |
| P-DenseCOVNet [26] | 93.8 | 97.5 | 90.0 | 94.0 | – | – | – | CT Scan |
| Proposed VGG16+ HOGSVM | 99.4 | 100 | 98.7 | 99.7 | 99.5 | 98.7 | 99.3 | CT Scan |
The proposed model is compared with comparable methods in the field to assess the effectiveness of COVID-19 detection. The network shows promising results against various state-of-the-art COVID-19 detection algorithms, although it should be noted that the datasets used in previous research differ from those employed in this work. The proposed model's accuracy of 99.4% matches that of the VGG19 + LBPKNN model; both performed better than the alternative approaches in the comparison. The CNN-Features Fusion Method and CNN-Pyramid MLP Fusion obtained accuracies of 98.8% and 98.3%, respectively, both higher than P-DenseCOVNet's 93.8%. With a sensitivity of 100%, the proposed method outperformed the other models, indicating its capacity to detect every positive case. Like the proposed model, the VGG19 + LBPKNN model has a high sensitivity of 99.3%, while P-DenseCOVNet achieved 97.5% and CNN-Pyramid MLP Fusion 98.2%. With an F1 score of 99.7%, the proposed model clearly strikes a good balance between sensitivity and precision; the F1 score of VGG19 + LBPKNN was 99.0%, higher than those of the other models. Overall, the proposed concatenated feature model (VGG16 + HOGSVM) obtained outstanding results in sensitivity, precision, F1 score, Youden index, and AUC, demonstrating its efficacy in COVID-19 classification. Combining VGG16 deep learning features and HOG features with the SVM classifier proves to be a highly successful method for accurately classifying COVID-19 patients from CT scan images.
4. Limitations
Although the proposed method shows outstanding performance, the lack of access to extensive public datasets from many hospitals is a limitation of this study. Additionally, geographical location and viral mutation can affect the kind of infection, and public databases do not offer extra details such as patients' underlying medical conditions. Therefore, cooperation between hospitals on private data, together with publicly available datasets from many domains, might enhance the proposed model's performance even further.
4.1. Performance evaluation
Various metrics were utilized to evaluate the performance of our proposed model. These metrics are calculated as shown in equations (2), (3), (4), (5), (6), (7) below.
A model's accuracy is determined by the ratio of classifications correctly predicted to the total number of predictions performed.
$$\mathrm{Accuracy}=\frac{TP+TN}{TP+TN+FP+FN}\tag{2}$$
Sensitivity/recall is the rate of true positive which is determined by the ratio of correctly predicted positive values to the total number of positives.
$$\mathrm{Sensitivity}=\frac{TP}{TP+FN}\tag{3}$$
Specificity is the rate of true negative which is determined by the ratio of correctly predicted negative values to the total number of negatives.
$$\mathrm{Specificity}=\frac{TN}{TN+FP}\tag{4}$$
Precision is the ratio of correctly predicted positives to the total number of predicted positives.
$$\mathrm{Precision}=\frac{TP}{TP+FP}\tag{5}$$
The F1-score represents the balance between the sensitivity and precision of a model. It always lies between 0 and 1; a value close to 1 indicates excellent performance, while a value close to 0 indicates poor performance.
$$F1=\frac{2\times\mathrm{Precision}\times\mathrm{Sensitivity}}{\mathrm{Precision}+\mathrm{Sensitivity}}\tag{6}$$
The Youden index represents a cut-point that maximizes the biomarker's differentiating capability when sensitivity and specificity are given equal weight.
$$J=\mathrm{Sensitivity}+\mathrm{Specificity}-1\tag{7}$$
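Equations (2)–(7) follow directly from the four confusion-matrix counts. A minimal sketch in Python (TP, TN, FP, and FN are the standard true/false positive/negative counts; the example numbers are illustrative, not from the paper):

```python
def classification_metrics(tp, tn, fp, fn):
    """Compute the evaluation metrics of Eqs. (2)-(7) from
    true/false positive and negative counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)              # Eq. (2)
    sensitivity = tp / (tp + fn)                            # Eq. (3), recall
    specificity = tn / (tn + fp)                            # Eq. (4)
    precision = tp / (tp + fp)                              # Eq. (5)
    f1 = (2 * precision * sensitivity
          / (precision + sensitivity))                      # Eq. (6)
    youden = sensitivity + specificity - 1                  # Eq. (7)
    return dict(accuracy=accuracy, sensitivity=sensitivity,
                specificity=specificity, precision=precision,
                f1=f1, youden=youden)

# Illustrative counts: 90 TP, 85 TN, 5 FP, 10 FN
m = classification_metrics(tp=90, tn=85, fp=5, fn=10)
```

For per-class metrics in a multi-class setting such as this study's three classes, the counts are taken one-vs-rest for each class.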
5. Conclusions
In this research, KNN and SVM classifiers were used to classify CT scan images of COVID-19, common pneumonia, and healthy individuals. Handcrafted HOG features and automatic deep features from eight pre-trained CNNs were extracted and used to train the classifiers. To enhance the classifiers' performance, a feature formed by concatenating the HOG and pre-trained CNN features was proposed for training. The proposed concatenated feature enhanced SVM classifier performance compared to training on HOG or CNN features alone. Although KNN performs well on HOG and CNN features individually, its performance decreases with the concatenated features. In future work, other classifiers such as ensemble classifiers, random forest, decision tree, and bagged trees will be employed to explore the influence of the concatenated features on their performance in COVID-19 detection.
Data availability
The data supporting the reported results of this study have been cited in this study: [[13], [14], [15], [16]]
Funding statement
This research received no external funding.
Additional information
No additional information is available for this paper.
CRediT authorship contribution statement
Hassana Abubakar: Writing – original draft, Methodology, Conceptualization. Fadi Al-Turjman: Writing – review & editing, Validation, Supervision, Resources. Zubaida S. Ameen: Writing – review & editing, Data curation. Auwalu S. Mubarak: Writing – review & editing, Validation, Resources. Chadi Altrjman: Methodology.
Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Acknowledgments
We would like to thank the International Research Center for Artificial Intelligence and Internet of Things, Near East University Cyprus for the computational resources provided for this study.
Contributor Information
Hassana Abubakar, Email: 20213669@std.neu.edu.tr.
Fadi Al-Turjman, Email: fadi.alturjman@neu.edu.tr.
Appendices.
Appendix I.
Confusion Matrix. (a) KNN; (b) SVM performance on features extracted with HOG.
Appendix II.
Confusion Matrix. Performance of the KNN classifier on features extracted with CNN; (a) GoogleNet (b) MobileNetv2 (c) ResNet-18 (d) ResNet-50 (e) ResNet-101 (f) ShuffleNet (g) VGG-16 (h) VGG-19.
Appendix III.
Confusion Matrix. Performance of the SVM classifier on features extracted with CNN; (a) GoogleNet (b) MobileNetv2 (c) ResNet-18 (d) ResNet-50 (e) ResNet-101 (f) ShuffleNet (g) VGG-16 (h) VGG-19.
Appendix IV.
Confusion Matrix. Performance of the KNN classifier on features extracted with CNN + HOG; (a) GoogleNet (b) MobileNetv2 (c) ResNet-18 (d) ResNet-50 (e) ResNet-101 (f) ShuffleNet (g) VGG-16 (h) VGG-19.
Appendix V.
Confusion Matrix. Performance of the SVM classifier on features extracted with CNN + HOG; (a) GoogleNet (b) MobileNetv2 (c) ResNet-18 (d) ResNet-50 (e) ResNet-101 (f) ShuffleNet (g) VGG-16 (h) VGG-19.
References
- 1.World Health Organization. WHO Coronavirus (COVID-19) Dashboard. 2022. [Google Scholar]
- 2.Cascella M., Rajnik M., Aleem A., Dulebohn S.C., Di Napoli R. 2022. Features, Evaluation, and Treatment of Coronavirus (COVID-19) Statpearls [internet] [PubMed] [Google Scholar]
- 3.Chavez S., Long B., Koyfman A., Liang S.Y. Coronavirus Disease (COVID-19): a primer for emergency physicians. Am. J. Emerg. Med. 2021 Jun;44:220–229. doi: 10.1016/j.ajem.2020.03.036. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 4.Yao T., Lin H., Mao J., Huo S., Liu G. CT imaging features of patients infected with 2019 novel coronavirus. BIO Integr. 2021 Apr;2(1):5–11. doi: 10.15212/bioi-2020-0038. [DOI] [Google Scholar]
- 5.Lu C.-Y., Bai H.-L., Yuan Y., Lu Q. The value of CT imaging for COVID-19 pneumonia: report of a false-negative nucleic acid test case. J. Thorac. Dis. 2020 May;12(5):2827. doi: 10.21037/jtd.2020.03.62. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 6.Zhai P., Ding Y., Wu X., Long J., Zhong Y., Li Y. The epidemiology, diagnosis and treatment of COVID-19. Int. J. Antimicrob. Agents. 2020 May;55(5) doi: 10.1016/j.ijantimicag.2020.105955. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 7.Udugama B., Kadhiresan P., Kozlowski H.N., Malekjahani A., Osborne M., Li V.Y., Chen H., Mubareka S., Gubbay J.B., Chan W.C. Diagnosing COVID-19: the disease and tools for detection. ACS Nano. 2020 Mar;14(4):3822–3835. doi: 10.1021/acsnano.0c02624. [DOI] [PubMed] [Google Scholar]
- 8.Wang S., Summers R.M. Machine learning and radiology. Med. Image Anal. 2012 Feb;16(5):933–951. doi: 10.1016/j.media.2012.02.005. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 9.Khan M.A., Ashraf I., Alhaisoni M., Damaševičius R., Scherer R., Rehman A., Bukhari S.A.C. Multimodal brain tumor classification using deep learning and robust feature selection: a machine learning application for radiologists. Diagnostics. 2020 Aug;10(8):565. doi: 10.3390/diagnostics10080565. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 10.Abubakar H., Ameen Z.S., Altrjman C., Alturjman S., Mubarak A.S., Al-Turjman F. Breast Invasive Ductal Casinoma (IDC) detection using AlexNet and ResNet. NEU J Artif Intell Internet Things. 2022 Feb;1(1):48–53. [Google Scholar]
- 11.Ibrahim A.U., Al-Turjman F., Ozsoz M., Serte S. Computer aided detection of tuberculosis using two classifiers. Biomed Eng Tech. 2022 Sep;67(6):513–524. doi: 10.1515/bmt-2021-0310. [DOI] [PubMed] [Google Scholar]
- 12.Abubakar H., Ameen Z.S., Alturjman S., Mubarak A.S., Al-Turjman F. Computational Intelligence in Healthcare. CRC Press; 2023. Detection of diabetic foot ulcer (DFU) with AlexNet and ResNet-101; pp. 181–191. [Google Scholar]
- 13.Yang, X, He, X, Zhao, J, Zhang, Y, Zhang, S, Xie, P. COVID-CT-dataset: a CT image dataset about COVID-19. n.d.; 1–14, arXiv:2003.13865.
- 14.Yan, T, Wong, PK, Ren, H, Wang, H, Wang, J, Li, Y. COVID-19 and common pneumonia chest CT dataset. Mendeley Data. n.d.; Retrieved from https://data.mendeley.com/datasets/3y55vgckg6/1.
- 15.Soares E., Angelov P., Biaso S., Froes M.H., Abe D.K. vols. 1–8. 2020. (SARS-CoV-2 CT-scan Dataset: A Large Dataset of Real Patients CT Scans for SARS-CoV-2 Identification). (medRxiv) [Google Scholar]
- 16.Zhao J., Zhang Y., He X., Xie P. 2020 Jun. Covid-ct-dataset: a Ct Scan Dataset about Covid-19. [Google Scholar]
- 17.Silva C., Bouwmans T., Frélicot C. International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications. VISAPP 2015; 2015 Mar. An extended center-symmetric local binary pattern for background modeling and subtraction in videos. [DOI] [Google Scholar]
- 18.Burger W., Burge M.J. Digital Image Processing: an Algorithmic Introduction. Springer International Publishing; Cham: 2022. Scale-invariant feature transform (SIFT) pp. 709–763. [DOI] [Google Scholar]
- 19.Nanni L., Brahnam S., Ghidoni S., Menegatti E., Barrier T. Different approaches for extracting information from the co-occurrence matrix. PLoS One. 2013 Dec;8(12) doi: 10.1371/journal.pone.0083554. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 20.Litjens G., Kooi T., Bejnordi B.E., Setio A.A.A., Ciompi F., Ghafoorian M., et al. A survey on deep learning in medical image analysis. Med. Image Anal. 2017;42:60–88. doi: 10.1016/j.media.2017.07.005. [DOI] [PubMed] [Google Scholar]
- 21.Rohmah L.N., Bustamam A. 2020 3rd International Conference on Information and Communications Technology. ICOIACT); 2020. Improved classification of coronavirus disease (COVID-19) based on combination of texture features using CT scan and X-ray images; pp. 105–109. [Google Scholar]
- 22.Akram T., Attique M., Gul S., Shahzad A., Altaf M., Naqvi S., Van Der Laak J.A., Van Ginneken B., Sánchez C.I. A novel framework for rapid diagnosis of COVID-19 on computed tomography scans. Pattern Anal. Appl. 2021 Jan;24(3):951–964. doi: 10.1007/s10044-020-00950-0. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 23.Saygılı A. A new approach for computer-aided detection of coronavirus (COVID-19) from CT and X-ray images using machine learning methods. Appl. Soft Comput. 2021 Jul;105 doi: 10.1016/j.asoc.2021.107323. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 24.Uddin K.M.M., Dey S.K., Babu H.M.H., Mostafiz R., Uddin S., Shoombuatong W., Moni M.A. Feature fusion based VGGFusionNet model to detect COVID-19 patients utilizing computed tomography scan images. Sci. Rep. 2022;12(1) doi: 10.1038/s41598-022-25539-x. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 25.Aslan N., Koca G.O., Kobat M.A., Dogan S. Multi-classification deep CNN model for diagnosing COVID-19 using iterative neighborhood component analysis and iterative ReliefF feature selection techniques with X-ray images. Chemom. Intell. Lab. Syst. 2022;224 doi: 10.1016/j.chemolab.2022.104539. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 26.Sadik F., Dastider A.G., Subah M.R., Mahmud T., Fattah S.A. A dual-stage deep convolutional neural network for automatic diagnosis of COVID-19 and pneumonia from chest CT images. Comput. Biol. Med. 2022;149(June) doi: 10.1016/j.compbiomed.2022.105806. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 27.Liu Y., Xing W., Zhao M., Lin M., Lin M. A new classification method for diagnosing COVID-19 pneumonia based on joint CNN features of chest X-ray images and parallel pyramid MLP-mixer module. Neural Comput. Appl. 2023;35(23):17187–17199. doi: 10.1007/s00521-023-08604-y. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 28.Ravi V., Narasimhan H., Chakraborty C., Pham T.D. Deep learning-based meta-classifier approach for COVID-19 classification using CT scan and chest X-ray images. Multimed. Syst. 2022 Jul;28(4):1401–1415. doi: 10.1007/s00530-021-00826-1. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 29.Erdem K., Kobat M.A., Bilen M.N., Balik Y., Alkan S., Cavlak F., Poyraz A.K., Barua P.D., Tuncer I., Dogan S., others Hybrid-Patch-Alex: a new patch division and deep feature extraction-based image classification model to detect COVID-19, heart failure, and other lung conditions using medical images. Int. J. Imag. Syst. Technol. 2023;33(4):1144–1159. doi: 10.1002/ima.22914. [DOI] [Google Scholar]
- 30.Zhang W., Pogorelsky B., Wolf T. 2020. Classification of Covid-19 X-Ray Images Using a Combination of Deep and Handcrafted Features. [Google Scholar]
- 31.Mubarak A.S., Serte S., Al-Turjman F., Ameen Z.S., Ozsoz M. Local binary pattern and deep learning feature extraction fusion for COVID-19 detection on computed tomography images. Expet Syst. 2022 Sep;39(3) doi: 10.1111/exsy.12842. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 32.Tuncer I., Barua P.D., Dogan S., Baygin M., Tuncer T., Tan R.-S., Yeong C.H., Acharya U.R. Swin-textural: a novel textural features-based image classification model for COVID-19 detection on chest computed tomography. Inform. Med. Unlocked. 2023;36 doi: 10.1016/j.imu.2022.101158. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 33.Loey M., Manogaran G., Khalifa N.E.M. A deep transfer learning model with classical data augmentation and CGAN to detect COVID-19 from chest CT radiography digital images. Neural Comput. Appl. 2020 Oct doi: 10.1007/s00521-020-05437-x. 1–13. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 34.Burkov A. vol. 1. Andriy Burkov Quebec City, QC; Canada: 2019. (The Hundred-Page Machine Learning Book). [Google Scholar]
- 35.Nugroho K.A. 2018 2nd International Conference on Informatics and Computational Sciences (ICICoS) 2018. A comparison of handcrafted and deep neural network feature extraction for classifying optical coherence tomography (OCT) images; pp. 1–6. [DOI] [Google Scholar]
- 36.Srinivasan P.P., Kim L.A., Mettu P.S., Cousins S.W., Comer G.M., Izatt J.A., Farsiu S. Fully automated detection of diabetic macular edema and dry age-related macular degeneration from optical coherence tomography images. Biomed. Opt Express. 2014 Sep;5(10):3568–3577. doi: 10.1364/BOE.5.003568. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 37.Wang H., Hu J., Deng W. Face feature extraction: a complete review. IEEE Access. 2017 Dec;6:6001–6039. doi: 10.1109/ACCESS.2017.2784842. [DOI] [Google Scholar]
- 38.Patel R. 2021 Nov. Predicting Invasive Ductal Carcinoma Using a Reinforcement Sample Learning Strategy Using Deep Learning. arXiv Prepr arXiv210512564. [DOI] [Google Scholar]
- 39.Phil K. Apress; New York: 2017. Matlab Deep Learning with Machine Learning, Neural Networks and Artificial Intelligence. [DOI] [Google Scholar]
- 40.Cheng P.M., Malhi H.S. Transfer learning with convolutional neural networks for classification of abdominal ultrasound images. J. Digit. Imag. 2017 Nov;30(2):234–243. doi: 10.1007/s10278-016-9929-2. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 41.Ravishankar H., Sudhakar P., Venkataramani R., Thiruvenkadam S., Annangi P., Babu N., Vaidya V. Deep Learning and Data Labeling for Medical Applications. Springer; 2016 Sep. Understanding the mechanisms of deep transfer learning for medical images. 188–96. [DOI] [Google Scholar]
- 42.Yu Y., Lin H., Meng J., Wei X., Guo H., Zhao Z. Deep transfer learning for modality classification of medical images. Information. 2017 Jul;8(3):91. doi: 10.3390/info8030091. [DOI] [Google Scholar]
- 43.Tajbakhsh N., Shin J.Y., Gurudu S.R., Hurst R.T., Kendall C.B., Gotway M.B., Liang J. Convolutional neural networks for medical image analysis: full training or fine tuning? IEEE Trans. Med. Imag. 2016 Mar;35(5):1299–1312. doi: 10.1109/TMI.2016.2535302. [DOI] [PubMed] [Google Scholar]