Biomedical Signal Processing and Control. 2021 Aug 25;71:103076. doi: 10.1016/j.bspc.2021.103076

COVID-19 disease identification from chest CT images using empirical wavelet transformation and transfer learning

Pramod Gaur, Vatsal Malaviya, Abhay Gupta, Gautam Bhatia, Ram Bilas Pachori, Divyesh Sharma
PMCID: PMC8384584  PMID: 34457034

Abstract

In the current scenario, the spread of the novel coronavirus disease (COVID-19) is increasing day by day, and it is very important to control and cure this disease. Reverse transcription-polymerase chain reaction (RT-PCR) and chest computerized tomography (CT) imaging are the available options, with CT a significantly useful and more truthful tool for classifying COVID-19 within the epidemic region. Most hospitals have CT imaging machines, so it is fruitful to utilize chest CT images for early diagnosis and classification of COVID-19 patients. However, this requires a radiology expert and a good amount of time to classify chest CT-based COVID-19 images, especially when the disease is spreading at a rapid rate. During the COVID-19 pandemic, there is a need for an efficient automated way to check for infection, and CT is one of the best ways to detect infection in patients. This paper introduces a new method for preprocessing CT scan images and classifying them as COVID-19 positive or negative. The proposed method uses empirical wavelet transformation for preprocessing; the best components of the red, green, and blue channels of the image are selected and trained on the proposed network. With the proposed methodology, a classification accuracy of 85.5%, an F1 score of 85.28%, and an AUC of 96.6% are achieved.

Keywords: Empirical Wavelet Transform (EWT), DenseNet, Convolutional Neural Network (CNN), Area Under Curve (AUC)

1. Introduction

Coronaviruses are a family of enveloped ribonucleic acid (RNA) viruses [1] distributed widely among mammals and birds, causing principally respiratory or enteric diseases and, in rare cases, neurological illness or hepatitis [2]. The coronavirus disease (COVID-19), which allegedly started to spread from the Wuhan district in China, has been declared a global pandemic by the World Health Organisation (WHO) [3], [4]. To date (1st June 2020) it has infected over 7,961,307 people and caused the death of 434,471 people. In total, 213 countries and territories have been affected by COVID-19. Although all parts of the world have been affected by this virus, the major impact has been seen in countries such as the USA, Italy, India, China, Spain, Brazil, Russia, the United Kingdom, Peru, Iran, and Turkey, where case counts exceed 100,000 [5].

It is a highly contagious disease and spreads by droplets via coughing or sneezing [6]. In the absence of a cure or vaccine, it is essential to control the spread of this disease by early detection and self-isolation of infected patients. The infection can be detected using tests based on reverse transcription polymerase chain reaction (RT-PCR) [7]. Unfortunately, testing is limited due to a global shortage of testing kits. Moreover, RT-PCR has high false-negative rates [8] and is dependent on the quality of sample collection. There is also a shortage of personnel to collect samples, and it is a time-consuming process. Failure to reliably detect new cases not only prevents patients from receiving appropriate treatment but also risks the spread to healthy subjects.

On the other hand, computed tomography (CT) imaging can reliably detect typical radiographic features in patients with pneumonia caused by COVID-19 [9]. Serious complications are noted in patients with symptoms such as difficulty in breathing or shortness of breath [10], [11], which can be life threatening; in patients with severe disease, respiratory failure requires the use of a ventilator for supportive care. A diagnosis can therefore be made reliably and rapidly based on radiographic changes in these patients, even in initially RT-PCR negative patients [12].

However, the use of CT imaging is limited by the lack of expert radiologists to analyze and report these CT images. The proposed automated CT image analysis method can be used to detect COVID-19, reducing the time required to analyze the scans and in turn reducing the workload of doctors. Chest X-rays have also shown promising results in COVID-19 detection [13], but CT imaging turns out to be a better choice than X-ray imaging because chest CT produces a more detailed view than chest X-ray: it can capture small bones, soft tissues, and blood vessels at the same time. Moreover, for detailed diagnosis, the patient has to rely on chest CT imaging rather than chest X-ray imaging. Additionally, the false detection rate and false alarm rate of the RT-PCR test are very high, so an alternate method with high sensitivity is required for COVID-19 identification. The papers from Bernheim et al. [14], Das et al. [15], Nayak et al. [16], Singh et al. [17], and Sharma et al. [18] also show that patients suffering from COVID-19 exhibit visible changes in chest imaging, such as bilateral changes, which highlights that relying only on RT-PCR for COVID-19 detection is not the only option.

In this research paper, the proposed method performs an automatic classification of COVID-19 infected patients using chest CT images. This helps minimize the workload of already exhausted front-line health professionals who are working day and night to control the situation, which is very relevant in the current scenario of an already stretched medical workforce in developing countries.

Among the current techniques being proposed that use CT scans, most researchers feed raw CT scans directly to the neural network. Mishra et al. [19] used various DenseNet based convolutional neural network (CNN) models for COVID-19 detection and reported that DenseNet121 produced the best results. Jaiswal et al. [20] used the same approach and reported that DenseNet201 produced overall good results for COVID-19 detection.

In the proposed method, however, CNN and empirical wavelet transformation (EWT) are employed on CT scans of normal subjects and COVID-19 affected patients. This paper focuses on the benefits of EWT combined with the proposed network architecture. Before applying EWT, each image is split into its red (R), green (G), and blue (B) components, and the EWT is applied to analyze which frequency components of which channels are most affected by this virus. The EWT splits the CT scan information in the image into different frequency sub-bands; for this experiment, each CT scan is divided into 5 sub-bands. The dataset used for this research consists of 1252 CT scan images of COVID-19 positive patients and 1230 CT images of patients who were not infected by COVID-19.

This paper proposes a novel approach to COVID-19 detection by applying EWT on each RGB channel of the chest CT scan for feature selection and then applying DenseNet121 to select the best component. The use of EWT for feature extraction improves the overall results of the proposed method compared to using DenseNet121 directly. The papers from Chaudhary et al. [21], [22] show that using EWT, or slight modifications of EWT, for feature extraction has been a favorite choice among researchers.

The aims of this paper are as follows:

1) To overcome the high false-negative rate of RT-PCR, chest CT scan images are used in this paper to detect and diagnose COVID-19 disease.

2) To propose a fusion of an advanced signal processing technique (EWT) and deep learning techniques that overcomes the limitation of RT-PCR by reducing the high false-negative rate in the classification of COVID-19(+) and COVID-19(-) patients.

3) To study the proposed model on different sub-bands of the R, G, and B channels of CT scans, using EWT for feature extraction and then deep learning techniques to obtain a classification from the extracted information. EWT has been a very popular method for feature extraction and has shown promising results [23].

4) To compare the proposed work with other state-of-the-art methods in terms of various performance metrics such as accuracy, F1-score, and AUC measure.

The rest of the paper is structured as follows: Section 2 provides details about the dataset used in the research. Section 3 provides a detailed overview of the proposed method, EWT, the CNN architecture, the DenseNet architecture, and transfer learning. The results and discussion are presented in Section 4. Finally, Section 5 concludes the paper.

2. Data set

For this study, the publicly available SARS-CoV-2 CT-scan dataset [24] is used, which contains 1252 CT scan images of SARS-CoV-2 (COVID-19) positive patients and 1230 CT scan images of patients not infected by SARS-CoV-2. The dataset was collected from real patients in hospitals in Sao Paulo, Brazil. The detailed characteristics of each patient are withheld by the hospitals due to privacy concerns. The dataset is also publicly available on Kaggle at https://www.kaggle.com/plameneduardo/sarscov2-ctscan-dataset.

Fig. 1 depicts the number of patients used for composing this dataset: 60 COVID-19 positive patients (32 males and 28 females) and 60 COVID-19 negative patients (30 males and 30 females).

Fig. 1. Number of subjects and patients used for composing this dataset.

Fig. 2 shows a few sample CT scan images from the used dataset. The first two columns show chest CT scan images of COVID-19 positive patients, whereas the other two columns show COVID-19 negative chest CT scan images.

Fig. 2. Sample chest CT scan images of COVID-19 positive and COVID-19 negative patients from the used dataset.

3. Proposed method

In the proposed method, a CNN based approach is applied to EWT sub-bands of chest CT scan images of both patients suffering and not suffering from COVID-19. This approach employs data augmentation and transfer learning. Standard techniques were used for data augmentation: resizing images to 256 x 256, randomly cropping images at a scale of 0.5 to 1, and randomly flipping images horizontally with a uniform probability distribution, thus generating 15 images per image in our dataset. Transfer learning is a technique used to re-train a CNN model designed for other similar tasks; DenseNet121 [25] has been employed for transfer learning in the proposed method. An overview of the proposed method is visualised in Fig. 3. Initially, the COVID-19 CT scan database is randomly split into training, validation, and testing sets of 1000, 100, and 152 images respectively, before data augmentation. Next, the CT scan images are split into R, G, and B channels, and EWT filtering is applied to each channel. A separate model is trained for each component created by the EWT. Data augmentation [26] is applied to the dataset to prevent the model from overfitting due to the limited number of CT scan images in the dataset. Finally, the dataset is fed to a modified, pre-trained variant of DenseNet121 (ChexNet) [27]; this process is repeated for each component.
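The augmentation stage can be sketched as follows with torchvision; the three operations and their parameters are the ones stated above, while the interpolation defaults and the final tensor conversion are assumptions.

```python
# A minimal sketch of the augmentation pipeline described above.
import torchvision.transforms as T

augment = T.Compose([
    T.Resize((256, 256)),                        # resize every CT slice to 256 x 256
    T.RandomResizedCrop(256, scale=(0.5, 1.0)),  # random crop covering 50-100% of the image
    T.RandomHorizontalFlip(p=0.5),               # uniform-probability horizontal flip
    T.ToTensor(),                                # convert the PIL image to a CHW float tensor
])
```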

Fig. 3. Overview of the proposed model.

3.1. 2D-Empirical wavelet transform (EWT)

There are various signal decomposition methods which have been studied to decompose physiological signals such as the electroencephalogram (EEG), electrocardiogram (ECG), and electromyogram (EMG). A few of them are empirical mode decomposition (EMD) [28], multivariate EMD (MEMD) [29], eigenvalue decomposition (EVD) [30], EWT [31], and 2D-EWT [32]. The EWT method has been studied for cross-term reduction in the Wigner-Ville distribution in [33] and for automatic detection of coronary artery disease in [34]. The EVD method has been employed for EMG signal analysis [35]. The EMD and MEMD methods find applications in brain-computer interfaces (BCI) to handle the non-stationary nature of the data [36], [37], [38], [39], [40]. In this paper, 2D-EWT [31], [32] is used for signal decomposition. It is an adaptive method of signal decomposition [31] based on the information content of the input signal [33]. Unlike the Fourier or wavelet transform [41], it does not use pre-defined basis functions. The Fourier spectrum range in EWT is from 0 to $\pi$ and is segmented into $N$ parts ($N-1$ intermediate segmentation points). Segment limits are denoted by $\omega_n$, where the starting and ending limits are $\omega_0=0$ and $\omega_N=\pi$. The transition phase centered around $\omega_n$ has a width of $2\lambda\omega_n$, where $0 < \lambda < 1$. Littlewood-Paley wavelets [42] are used for bandpass filtering, with the empirical scaling function $\delta_1(W)$ defined as follows:

$$\delta_1(W)=\begin{cases}1 & \text{if } |W|\le(1-\lambda)\omega_1\\ \cos\!\left[\dfrac{\pi}{2}f(\lambda,\omega_1)\right] & \text{if } (1-\lambda)\omega_1\le|W|\le(1+\lambda)\omega_1\\ 0 & \text{otherwise}\end{cases}\tag{1}$$

and the empirical wavelets $\zeta_n(W)$ are defined as follows: if $n < N-1$,

$$\zeta_n(W)=\begin{cases}1 & \text{if } (1+\lambda)\omega_n\le|W|\le(1-\lambda)\omega_{n+1}\\ \cos\!\left[\dfrac{\pi}{2}f(\lambda,\omega_{n+1})\right] & \text{if } (1-\lambda)\omega_{n+1}\le|W|\le(1+\lambda)\omega_{n+1}\\ \sin\!\left[\dfrac{\pi}{2}f(\lambda,\omega_n)\right] & \text{if } (1-\lambda)\omega_n\le|W|\le(1+\lambda)\omega_n\\ 0 & \text{otherwise}\end{cases}\tag{2}$$

and if $n = N-1$,

$$\zeta_{N-1}(W)=\begin{cases}1 & \text{if } (1+\lambda)\omega_{N-1}\le|W|\\ \sin\!\left[\dfrac{\pi}{2}f(\lambda,\omega_{N-1})\right] & \text{if } (1-\lambda)\omega_{N-1}\le|W|\le(1+\lambda)\omega_{N-1}\\ 0 & \text{otherwise}\end{cases}\tag{3}$$

where $f(\lambda,\omega_n)$ and $f(\lambda,\omega_{n+1})$ can be represented as:

$$f(\lambda,\omega_n)=f\!\left(\frac{1}{2\lambda\omega_n}\big(|W|-(1-\lambda)\omega_n\big)\right),\quad f(\lambda,\omega_{n+1})=f\!\left(\frac{1}{2\lambda\omega_{n+1}}\big(|W|-(1-\lambda)\omega_{n+1}\big)\right)$$

Also, $f(z)$ satisfies the following conditions:

$$f(z)=\begin{cases}0 & \text{if } z\le 0\\ 1 & \text{if } z\ge 1\end{cases}\qquad f(z)+f(1-z)=1\quad\forall\, z\in[0,1]$$

The steps of the Littlewood-Paley EWT algorithm for image decomposition [43] are as follows:

1. Compute the pseudo-polar fast Fourier transform (PPFFT) of the image $I$. The 2D spectrum of the PPFFT is represented as $P(\Theta, |W|)$; take its average with respect to $\Theta$:

$$X(|W|)=\frac{1}{N_\Theta}\sum_{r=0}^{N_\Theta-1}P(\Theta_r,|W|)\tag{4}$$

2. Detect boundaries on $X(|W|)$ using the scale-space approach and design the corresponding filter bank $B=\{\delta_1,\{\zeta_n\}_{n=1}^{N-1}\}$.

3. Filter $I$ along the rows (i.e. along $\Theta$) using $B$, which gives $N$ sub-band images.
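As a concrete illustration of Eqs. (1)-(3), the following minimal numpy sketch builds a 5-band Littlewood-Paley filter bank on a 1D frequency axis. The polynomial choice of $f(z)$, the boundary values, and $\lambda=0.1$ are assumptions; the PPFFT-based boundary detection of step 1 is not shown.

```python
import numpy as np

def f(z):
    """Transition function: 0 for z <= 0, 1 for z >= 1, smooth in between
    (a common polynomial choice satisfying the conditions above)."""
    z = np.clip(z, 0.0, 1.0)
    return z**4 * (35 - 84 * z + 70 * z**2 - 20 * z**3)

def scaling(W, w1, lam):
    """Empirical scaling function delta_1(W) of Eq. (1): the low-pass band."""
    out = np.zeros_like(W)
    out[np.abs(W) <= (1 - lam) * w1] = 1.0
    t = (np.abs(W) >= (1 - lam) * w1) & (np.abs(W) <= (1 + lam) * w1)
    out[t] = np.cos(np.pi / 2 * f((np.abs(W[t]) - (1 - lam) * w1) / (2 * lam * w1)))
    return out

def wavelet(W, wn, wn1, lam):
    """Empirical wavelet zeta_n(W) of Eq. (2): the band between wn and wn1."""
    out = np.zeros_like(W)
    out[(np.abs(W) >= (1 + lam) * wn) & (np.abs(W) <= (1 - lam) * wn1)] = 1.0
    up = (np.abs(W) >= (1 - lam) * wn1) & (np.abs(W) <= (1 + lam) * wn1)
    out[up] = np.cos(np.pi / 2 * f((np.abs(W[up]) - (1 - lam) * wn1) / (2 * lam * wn1)))
    lo = (np.abs(W) >= (1 - lam) * wn) & (np.abs(W) <= (1 + lam) * wn)
    out[lo] = np.sin(np.pi / 2 * f((np.abs(W[lo]) - (1 - lam) * wn) / (2 * lam * wn)))
    return out

def last_wavelet(W, wl, lam):
    """Empirical wavelet zeta_{N-1}(W) of Eq. (3): the high-pass band."""
    out = np.zeros_like(W)
    out[np.abs(W) >= (1 + lam) * wl] = 1.0
    lo = (np.abs(W) >= (1 - lam) * wl) & (np.abs(W) <= (1 + lam) * wl)
    out[lo] = np.sin(np.pi / 2 * f((np.abs(W[lo]) - (1 - lam) * wl) / (2 * lam * wl)))
    return out

# Five bands on [0, pi]: one scaling function, three inner wavelets, one high-pass.
W = np.linspace(0, np.pi, 512)
bounds = [0.4, 1.0, 1.7, 2.4]   # omega_1..omega_4 (assumed; normally found in step 2)
lam = 0.1
bank = [scaling(W, bounds[0], lam)]
bank += [wavelet(W, bounds[i], bounds[i + 1], lam) for i in range(3)]
bank.append(last_wavelet(W, bounds[-1], lam))
```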

3.2. Convolutional Neural Network (CNN)

The CNN [44] is one of the most widely used neural network architectures today, with many applications in the field of computer vision [45]. It is a multi-layered neural network that is particularly powerful when the inputs are images, because it can achieve a degree of shift and deformation invariance. It consists of a number of layers: the input layer, which passes raw data to the subsequent layers for processing; the convolution layer, also called the learning layer, which performs the dot product between a filter and an image patch of the same size as the filter; the rectified linear unit (ReLU), a threshold layer that applies max(0, x) as its activation function [46]; a max-pooling or average-pooling layer, often used to reduce the spatial dimensions of the data [47]; the fully connected layer, in which every neuron is connected to all neurons of the previous layer; the softmax layer, which applies a normalized exponential function to map the data between 0 and 1; and finally the output layer, which provides the output of the CNN along with the label and loss function.
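A minimal PyTorch sketch of these layers (convolution, ReLU, max-pooling, fully connected, softmax) is given below; the channel counts and input size are illustrative assumptions, not the network used in this paper.

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # learning layer: filter dot products
            nn.ReLU(),                                   # threshold layer: max(0, x)
            nn.MaxPool2d(2),                             # halves the spatial dimensions
        )
        self.classifier = nn.Linear(16 * 128 * 128, num_classes)  # fully connected layer

    def forward(self, x):                                # x: (batch, 3, 256, 256)
        x = self.features(x)
        x = torch.flatten(x, 1)
        return torch.softmax(self.classifier(x), dim=1)  # normalizes outputs to [0, 1]

probs = SmallCNN()(torch.randn(1, 3, 256, 256))          # one random 256 x 256 RGB input
```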

3.3. DenseNet

CNNs can be more efficient to train, substantially deeper, and more accurate if they contain shorter connections between layers close to the input and those close to the output. In a dense convolutional network (DenseNet) [25], each layer is connected to every other layer in a feed-forward fashion. The input to each layer is the feature-maps of all previous layers, and the current layer's feature-map is used as input to all subsequent layers. DenseNet helps to avoid and reduce the impact of the vanishing gradient problem, strengthens feature propagation, encourages feature reuse, and substantially reduces the number of parameters.

More details about the different DenseNet architectures (DenseNet121, DenseNet169, DenseNet201) can be found in [25]. The proposed method uses DenseNet121, which keeps the model simpler and easier to compute owing to fewer parameters than the rest of the models. DenseNet architectures have also been observed to achieve faster convergence than the rest.
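The dense connectivity pattern can be sketched as follows; the growth rate and layer count are illustrative assumptions, not the exact DenseNet121 configuration.

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Each layer receives the concatenated feature-maps of all preceding layers."""
    def __init__(self, in_channels, growth_rate=32, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList([
            nn.Sequential(
                nn.BatchNorm2d(in_channels + i * growth_rate),
                nn.ReLU(),
                nn.Conv2d(in_channels + i * growth_rate, growth_rate, 3, padding=1),
            )
            for i in range(num_layers)
        ])

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            out = layer(torch.cat(features, dim=1))  # input = all previous feature-maps
            features.append(out)                     # output feeds every later layer
        return torch.cat(features, dim=1)
```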

3.4. Transfer learning

Training the entire DenseNet model from scratch requires a large labeled dataset and powerful hardware resources. Due to hardware limitations and the small size of the dataset, training the network from scratch is infeasible, so the transfer learning [48] technique is used in this work for better results. In transfer learning, the knowledge gained from solving one problem is transferred to solve a similar problem: a pre-trained model, already trained for other similar tasks, is re-trained on new data for a similar classification task by changing certain parameters, and the layer weights are adjusted according to the new training data. In the proposed method, a pre-trained DenseNet121 model is employed and trained on the new data using transfer learning. A new head, consisting of a dense layer and sigmoid activation (most suitable for binary classification), is applied to the pre-trained weights of DenseNet121.
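A sketch of this head replacement is shown below, assuming torchvision's DenseNet121; the ChexNet checkpoint path is hypothetical, with ImageNet weights standing in for it.

```python
import torch
import torch.nn as nn
from torchvision import models

model = models.densenet121(weights="IMAGENET1K_V1")   # generic pre-trained weights
# state = torch.load("chexnet_densenet121.pth")       # hypothetical ChexNet checkpoint
# model.load_state_dict(state, strict=False)

model.classifier = nn.Sequential(                     # new head for binary classification
    nn.Linear(model.classifier.in_features, 1),       # single dense layer
    nn.Sigmoid(),                                     # sigmoid activation
)
```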

4. Results and discussion

In this paper, a COVID-19 disease classification model is proposed to classify whether a person is infected with COVID-19 or not, using a CT scan of the person's chest. Feature extraction is an important aspect of deep learning. Past studies such as [49], [50] have shown the significance of frequency-oriented data. The EWT technique extracts data from the CT scan according to frequency sub-bands. Wavelet transformation was used in a previous study [51] for classification using brain CT scans together with principal component analysis (PCA) and K-nearest neighbours (KNN), basic machine learning techniques. A novel process is proposed combining EWT for feature extraction and deep learning techniques such as transfer learning for classification, and a significant improvement in performance over past studies has been observed. Initially, images are split into R, G, and B channels; EWT is then applied to the image of each channel [43], and 5 images (each containing information from a different frequency band) are obtained per channel, making a total of 15 images for one CT scan. After splitting the dataset into training and testing sets, transfer learning on DenseNet121 is applied and the results are obtained. This study also investigates which frequency sub-band carries the most helpful information for classification of COVID-19.
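The decomposition step can be sketched at a high level as follows; here `ewt2d` stands in for a 2D-EWT implementation (such as the filter bank sketched in Section 3.1) and is an assumption.

```python
import numpy as np

def decompose(image_rgb, ewt2d, n_bands=5):
    """Return 15 sub-band images (5 per channel) for one CT scan.

    image_rgb: (H, W, 3) array; ewt2d: callable returning n_bands images per channel.
    """
    subbands = []
    for c in range(3):                            # 0: R, 1: G, 2: B
        channel = image_rgb[:, :, c]
        subbands.extend(ewt2d(channel, n_bands))  # list of n_bands sub-band images
    return subbands                               # 3 channels x 5 bands = 15 images
```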

Results from the proposed model are presented through precision-recall curves [52], shown in Fig. 4, Fig. 5, and Fig. 6 for the R, B, and G channels respectively. The bigger the area covered, the better the performance.
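A minimal sketch of how such curves and their areas can be computed with scikit-learn is given below; the labels and scores are illustrative.

```python
import numpy as np
from sklearn.metrics import precision_recall_curve, auc

y_true = np.array([0, 1, 1, 0, 1])             # illustrative ground-truth labels
y_score = np.array([0.1, 0.9, 0.7, 0.3, 0.6])  # illustrative predicted probabilities

precision, recall, _ = precision_recall_curve(y_true, y_score)
pr_auc = auc(recall, precision)                # larger area = better performance
```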

Fig. 4. Precision-recall curve of the red channel.

Fig. 5. Precision-recall curve of the blue channel.

Fig. 6. Precision-recall curve of the green channel.

Fig. 4 shows the results for the R channel. The 5th sub-band, which represents the highest frequency band, has the highest area under the curve (AUC) of 0.9443, followed by the 1st sub-band (the lowest frequency sub-band) with a score of 0.8196; the mid-range frequency sub-bands follow in the order 2nd, 3rd, and 4th with scores of 0.7197, 0.5728, and 0.57. The performance of DenseNet121 on the 5th component of the red channel rises even higher, to 0.966, with transfer learning and fine-tuning.

Similarly, Fig. 5 shows the results for the B channel, where the highest frequency band has the highest AUC of 0.9278, followed by the lowest frequency band with a score of 0.8448. Fig. 6 shows the results for the G channel, where the highest AUC of 0.9271 is achieved by the highest frequency sub-band, followed by 0.7956 for the lowest frequency sub-band.

Different DenseNet variants were tried during experimentation. Table 1 reports the performance measures for the different DenseNet variants: as model complexity increases down the table, all performance metrics improve, and with transfer learning using ChexNet [27] pre-trained weights on DenseNet121, a margin of 10% improvement is observed.

Table 1.

Comparison among different DenseNet variants and effects of Transfer Learning on Accuracy, F1 Score and AUC.

Model Accuracy (%) F1-score (%) AUC score (%)
DenseNet121 74.50 65.70 91.68
DenseNet169 81.00 76.50 93.90
DenseNet201 82.00 78.50 95.10
DenseNet121 with TL 85.50 85.28 96.60

From the graphs in Figs. 4, 5, and 6, it can be noted that the 5th component of the red channel gives the best results. Training accuracy for the proposed method is 95.2%, the F1 score is 95.16%, and the AUC is 97.71%; testing accuracy is 85.5%, the F1 score is 85.28%, and the AUC is 96.6%. Precision, specificity, and sensitivity reach 96.5%, 96.6%, and 95.3% respectively on the training set, and 98%, 99%, and 87% respectively on the testing set.

Table 2 shows that our DenseNet169 outperforms the DenseNet-169 variant from [53], and the proposed method (DenseNet121 with TL) outperforms the existing models by a larger margin. The results also show that the best outcome is achieved for the high-frequency components of each channel, i.e., the 5th component of the R channel gives the best accuracy, followed by the 5th components of the B and G channels. Therefore, the proposed method performs better, with a simpler structure, than its current counterparts, showing an almost 2.2% increase in accuracy over previously existing models proposed by other research groups; the F1 score and AUC score are also better. Hyper-parameters also played a major role during fine-tuning of the model. Different learning rates were tried for performance improvement, among which 1e-3 showed the best results: with a greater learning rate the model converged faster but under-fitted, while at a lower learning rate of 1e-5 convergence was slower and training performance better, at the cost of over-fitting (worsened validation results); a sketch of this sweep is given after Table 2.

Table 2.

Comparison among existing models and the proposed model on Accuracy, F1 Score and AUC.

Model Accuracy (%) F1-score (%) AUC Score (%)
DenseNet-169 [53] 79.50 76.00 90.10
ResNet-50 [53] 77.40 74.60 86.40
DenseNet169 81.00 76.50 93.90
DenseNet121 with TL 85.50 85.28 96.60
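A minimal sketch of the learning-rate sweep described above; the Adam optimizer and the placeholder model are assumptions for illustration.

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 1)           # placeholder for the DenseNet121-with-new-head model
for lr in (1e-3, 1e-4, 1e-5):      # 1e-3 gave the best validation results here
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    # ... train for a few epochs and compare validation accuracy / F1 / AUC ...
```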

5. Conclusion

In this work, we have built a method to help with testing and identifying COVID-19 patients. The results of our proposed method show that, with the help of EWT, better feature extraction makes it possible to obtain improved performance from the same deep learning techniques. A further conclusion of the study is that the high-frequency components of the CT scan carry more useful information, and therefore improved predictability, for COVID-19, and among them the results of the R channel are more prominent than those of the B and G channels. The proposed method reduces the pressure on radiologists by minimizing their workload as well as cross-verifying the results from the testing kits. This work offers a better way to handle the situation and can help countries perform more tests than has been possible to date. This study may inspire work in other fields where CT scans or other frequency-oriented data are significant, applying this novel method for better performance.

CRediT authorship contribution statement

Pramod Gaur: Resources, Formal analysis, Software, Data curation, Writing - original draft, Conceptualization, Methodology. Vatsal Malaviya: Writing - original draft, Writing - review & editing. Abhay Gupta: Writing - original draft, Writing - review & editing. Gautam Bhatia: Writing - original draft, Writing - review & editing. Ram Bilas Pachori: Supervision, Conceptualization, Methodology, Validation, Visualization, Writing - review & editing. Divyesh Sharma: Supervision, Conceptualization, Methodology, Validation, Visualization, Writing - review & editing.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References

1. Gao Y., et al. Structure of the RNA-dependent RNA polymerase from COVID-19 virus. Science. 2020;368(6492):779–782. doi: 10.1126/science.abb7498.
2. Masters P.S. The molecular biology of coronaviruses. Advances in Virus Research, Vol. 66. Academic Press; 2006. pp. 193–292.
3. Lee A. Wuhan novel coronavirus (COVID-19): why global control is challenging? Public Health. 2020;179:A1. doi: 10.1016/j.puhe.2020.02.001.
4. Tavakoli A., Vahdat K., Keshavarz M. Novel coronavirus disease 2019 (COVID-19): an emerging infectious disease in the 21st century. Iranian South Medical Journal. 2020;22(6).
5. World Health Organization, et al. Coronavirus disease 2019 (COVID-19): situation report, 142. 2020.
6. Chin J., et al. Control of communicable diseases manual. 2000.
7. Ai T., et al. Correlation of chest CT and RT-PCR testing in coronavirus disease 2019 (COVID-19) in China: a report of 1014 cases. Radiology. 2020:200642.
8. Narin A., Kaya C., Pamuk Z. Automatic detection of coronavirus disease (COVID-19) using X-ray images and deep convolutional neural networks. 2020. arXiv:2003.10849.
9. Xu X., et al. Deep learning system to screen coronavirus disease 2019 pneumonia. arXiv. 2020.
10. Gautret, et al. Lack of nasal carriage of novel corona virus (HCoV-EMC) in French Hajj pilgrims returning from the Hajj 2012, despite a high rate of respiratory symptoms. Clinical Microbiology and Infection. 2013;19(7):E315–E317. doi: 10.1111/1469-0691.12174.
11. Sharma N., Krishnan P., Kumar R., Ramoji S., Chetupalli S.R., Ghosh P.K., Ganapathy S., et al. Coswara – a database of breathing, cough, and voice sounds for COVID-19 diagnosis. 2020. arXiv:2005.10548.
12. Wang D., et al. Clinical characteristics of 138 hospitalized patients with 2019 novel coronavirus–infected pneumonia in Wuhan, China. JAMA. 2020;323(11):1061–1069. doi: 10.1001/jama.2020.1585.
13. Gianchandani N., Jaiswal A., Singh D., Kumar V., Kaur M. Rapid COVID-19 diagnosis using ensemble deep transfer learning models from chest radiographic images. Journal of Ambient Intelligence and Humanized Computing. 2020:1–13. doi: 10.1007/s12652-020-02669-6.
14. Bernheim A., Mei X., Huang M., Yang Y., Fayad Z.A., Zhang N., Diao K., Lin B., Zhu X., Li K., et al. Chest CT findings in coronavirus disease-19 (COVID-19): relationship to duration of infection. Radiology. 2020:200463. doi: 10.1148/radiol.2020200463.
15. Das N.N., Kumar N., Kaur M., Kumar V., Singh D. Automated deep transfer learning-based approach for detection of COVID-19 infection in chest X-rays. IRBM. 2020. doi: 10.1016/j.irbm.2020.07.001.
16. Nayak S.R., Nayak D.R., Sinha U., Arora V., Pachori R.B. Application of deep learning techniques for detection of COVID-19 cases using chest X-ray images: a comprehensive study. Biomedical Signal Processing and Control. 2021;64. doi: 10.1016/j.bspc.2020.102365.
17. Singh D., Kumar V., Yadav V., Kaur M. Deep neural network-based screening model for COVID-19-infected patients using chest X-ray images. International Journal of Pattern Recognition and Artificial Intelligence. 2020:2151004.
18. Sharma R.R., Kumar M., Maheshwari S., Ray K.P. EVDHM-ARIMA-based time series forecasting model and its application for COVID-19 cases. IEEE Transactions on Instrumentation and Measurement. 2020;70:1–10. doi: 10.1109/TIM.2020.3041833.
19. Mishra A.K., Das S.K., Roy P., Bandyopadhyay S. Identifying COVID19 from chest CT images: a deep convolutional neural networks based approach. Journal of Healthcare Engineering. 2020;2020. doi: 10.1155/2020/8843664.
20. Jaiswal A., Gianchandani N., Singh D., Kumar V., Kaur M. Classification of the COVID-19 infected patients using DenseNet201 based deep transfer learning. Journal of Biomolecular Structure and Dynamics. 2020:1–8. doi: 10.1080/07391102.2020.1788642.
21. Chaudhary P.K., Pachori R.B. Automatic diagnosis of glaucoma using two-dimensional Fourier-Bessel series expansion based empirical wavelet transform. Biomedical Signal Processing and Control. 64:102237.
22. Chaudhary P.K., Pachori R.B. Automatic diagnosis of COVID-19 and pneumonia using FBD method. 2020 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), IEEE; 2020. pp. 2257–2263.
23. Maheshwari S., Pachori R.B., Acharya U.R. Automated diagnosis of glaucoma using empirical wavelet transform and correntropy features extracted from fundus images. IEEE Journal of Biomedical and Health Informatics. 2016;21(3):803–813. doi: 10.1109/JBHI.2016.2544961.
24. Soares E., Angelov P., Biaso S., Higa Froes M., Kanda Abe D. SARS-CoV-2 CT-scan dataset: a large dataset of real patients CT scans for SARS-CoV-2 identification. medRxiv. 2020.
25. Huang G., Liu Z., Van Der Maaten L., Weinberger K.Q. Densely connected convolutional networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; 2017. pp. 4700–4708.
26. Van Dyk D.A., Meng X.-L. The art of data augmentation. Journal of Computational and Graphical Statistics. 2001;10(1):1–50.
27. Rajpurkar P., et al. CheXNet: radiologist-level pneumonia detection on chest X-rays with deep learning. 2017. arXiv:1711.05225.
28. Huang N.E., Shen Z., Long S.R., Wu M.C., Shih H.H., Zheng Q., Yen N.-C., Tung C.C., Liu H.H. The empirical mode decomposition and the Hilbert spectrum for nonlinear and non-stationary time series analysis. Proceedings of the Royal Society of London. Series A: Mathematical, Physical and Engineering Sciences. 1998;454(1971):903–995.
29. Rehman N., Mandic D.P. Multivariate empirical mode decomposition. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences. 2010;466(2117):1291–1302.
30. Sharma R.R., Pachori R.B. Time–frequency representation using IEVDHM–HT with application to classification of epileptic EEG signals. IET Science, Measurement & Technology. 2017;12(1):72–82.
31. Gilles J. Empirical wavelet transform. IEEE Transactions on Signal Processing. 2013;61(16):3999–4010.
32. Gilles J., Tran G., Osher S. 2D empirical transforms. Wavelets, ridgelets, and curvelets revisited. SIAM Journal on Imaging Sciences. 2014;7(1):157–186.
33. Sharma R.R., Kalyani A., Pachori R.B. An empirical wavelet transform-based approach for cross-terms-free Wigner–Ville distribution. Signal, Image and Video Processing. 2020;14(2):249–256.
34. Sharma R.R., Kumar M., Pachori R.B. Joint time-frequency domain-based CAD disease sensing system using ECG signals. IEEE Sensors Journal. 2019;19(10):3912–3920.
35. Sharma R.R., Chandra P., Pachori R.B. Electromyogram signal analysis using eigenvalue decomposition of the Hankel matrix. Machine Intelligence and Signal Analysis. Springer; 2019. pp. 671–682.
36. Gaur P., Gupta H., Chowdhury A., McCreadie K., Pachori R.B., Wang H. A sliding window common spatial pattern for enhancing motor imagery classification in EEG-BCI. IEEE Transactions on Instrumentation and Measurement. 2021;70:1–9.
37. Gaur P., Pachori R.B., Wang H., Prasad G. A multi-class EEG-based BCI classification using multivariate empirical mode decomposition based filtering and Riemannian geometry. Expert Systems with Applications. 2018;95:201–211.
38. Gaur P., Pachori R.B., Wang H., Prasad G. An automatic subject specific intrinsic mode function selection for enhancing two-class EEG-based motor imagery-brain computer interface. IEEE Sensors Journal. 2019;19(16):6938–6947.
39. Gaur P., McCreadie K., Pachori R.B., Wang H., Prasad G. Tangent space features-based transfer learning classification model for two-class motor imagery brain–computer interface. International Journal of Neural Systems. 2019;29(10):1950025. doi: 10.1142/S0129065719500254.
40. Gaur P., Pachori R.B., Wang H., Prasad G. A multivariate empirical mode decomposition based filtering for subject independent BCI. 27th Irish Signals and Systems Conference (ISSC), IEEE; 2016. pp. 1–7.
41. Rao R. Wavelet transforms. Encyclopedia of Imaging Science and Technology. 2002.
42. Chui C.K., Shi X. Inequalities of Littlewood-Paley type for frames and wavelets. SIAM Journal on Mathematical Analysis. 1993;24(1):263–277.
43. Prabhakar T.N., Geetha P. Two-dimensional empirical wavelet transform based supervised hyperspectral image classification. ISPRS Journal of Photogrammetry and Remote Sensing. 2017;133:37–45.
44. LeCun Y., Bengio Y., et al. Convolutional networks for images, speech, and time series. The Handbook of Brain Theory and Neural Networks. 1995;3361(10).
45. Chandan G., Jain A., Jain H., et al. Real time object detection and tracking using deep learning and OpenCV. 2018 International Conference on Inventive Research in Computing Applications (ICIRCA), IEEE; 2018. pp. 1305–1308.
46. Krizhevsky A., Sutskever I., Hinton G.E. ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems; 2012. pp. 1097–1105.
47. Lawrence S., Giles C.L., Tsoi A.C., Back A.D. Face recognition: a convolutional neural-network approach. IEEE Transactions on Neural Networks. 1997;8(1):98–113. doi: 10.1109/72.554195.
48. Pan S.J., Yang Q. A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering. 2010;22(10):1345–1359.
49. Henschke C.I., Yankelevitz D.F., Mirtcheva R., McGuinness G., McCauley D., Miettinen O.S. CT screening for lung cancer: frequency and significance of part-solid and nonsolid nodules. American Journal of Roentgenology. 2002. doi: 10.2214/ajr.178.5.1781053.
50. Gross B., Glazer G., Orringer M., Spizarny D., Flint A. Bronchogenic carcinoma metastatic to normal-sized lymph nodes: frequency and significance. 1988.
51. Kaur K., Singh D. Brain CT-scan images classification using PCA, wavelet transform and k-NN. 2012.
52. Boyd K., Eng K.H., Page C.D. Area under the precision-recall curve: point estimates and confidence intervals. In: Blockeel H., Kersting K., Nijssen S., Železný F., editors. Machine Learning and Knowledge Discovery in Databases. Springer; Berlin Heidelberg: 2013.
53. Yang X., He X., Zhao J., Zhang Y., Zhang S., Xie P. COVID-CT-Dataset: a CT scan dataset about COVID-19. arXiv. 2020.
