Table 3.
Quantitative comparison of shallow and deep learning models. Accuracy (Acc.), Dice similarity coefficient (DSC), Jaccard index (JI), and time are reported for comparison. If the time is reported in seconds (s), it is the inference time; otherwise, it is the model training time. SGD is the abbreviation for stochastic gradient descent, ReLU for rectified linear unit, lr for learning rate, and AF for activation function.
| Methods | Optimizer | AF | LR scheduling | Image size | Pre-processing step | Dataset | Technique | Acc. | DSC | JI | Time |
|---|---|---|---|---|---|---|---|---|---|---|---|
| SL [150] | – | – | – | 440 × 440 | Image resizing | JSRT | Gaussian kernel distance matrix, FCM | 97.8 | – | – | – |
| SL [78] | – | – | – | 2048 × 2048, 4020 × 4892 | No pre-processing is performed | JSRT, MC, CXR-14 | FCM, Level set algorithm | – | 97.6 | 95.6 | 25–30(s) |
| SL [109] | – | – | – | – | No pre-processing is performed | Private | Linear discriminant, kNN, Neural Network, gray level thresholding | 76.0 | – | – | – | 
| SL [173] | – | – | – | 1024 × 1024 | Images are resized using bilinear interpolation | Private | Markov random field classifier, Iterated conditional modes | 94.8 | – | – | – |
| DL [66] | SGD | – | lr = 0.1, decreased to 0.01 after 70 training epochs | 256 × 256 | Image resizing | JSRT, MC | Residual learning, atrous convolution layers, network-wise training | – | 98.0 | 96.1 | – |
| DL [121] | Adam | ELU, Sigmoid | lr = 0.00001 with β₁ = 0.9 and β₂ = 0.999 | 128 × 128, 256 × 256 | Image resizing | JSRT | UNet, ELU, Highly restrictive regularization | – | 97.4 | 95.0 | 33.0(hr) |
| DL [99] | Adam | ReLU | lr = 0.001 | 256 × 256 | Image resizing and data augmentation by affine transformations | JSRT | UNet, cross-validation | – | 98.0 | 97.0 | – |
| DL [155] | SGD | ReLU, Softmax | lr = 0.01 | 512 × 512, 128 × 128 | Image resizing and scaling | MC | AlexNet, ResNet-18, Patch classification, Reconstruction of lungs | 96.9 | 94.0 | 88.07 | – |
| DL [81] | SGD | ReLU, Softmax | – | 2048 × 2048 | Histogram equalization and local contrast normalization are applied | JSRT | Modified SegNet | 96.2 | 95.0 | – | 3.0(hr) |
| DL [143] | – | ReLU, Softmax | – | 2048 × 2048 | No pre-processing is performed | JSRT | SegNet | – | 95.9 | – | – |
| DL [111] | Adam | ReLU, Softmax | lr = 0.0001 with β₁ = 0.9 and β₂ = 0.999 | 224 × 224 | Image resizing and data augmentation by flipping, rotating, and cropping | JSRT, MC | Modified SegNet (Lf-SegNet) | 98.73 | – | 95.10 | – |
| DL [164] | SGD | – | lr = 0.02 with poly learning rate policy | 512 × 512 | Image resizing and data augmentation by image-to-image translation | JSRT, MC, NIH | ResNet-101, dilated convolution, CCAM, MUNIT | – | 97.6 | – | – |
| DL [85] | SGD | ReLU, Sigmoid | lr = 0.01, decreased by a factor of 10 when validation accuracy does not improve | 512 × 512 | Image resizing | JSRT, MC, Shenzhen | ResNet-101, UNet, self-attention modules | – | 97.2 | – | 1.4(s) |
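
The DSC and JI values in the table compare a predicted lung mask against the ground-truth annotation: DSC = 2|A∩B|/(|A|+|B|) and JI = |A∩B|/|A∪B|. The snippet below is a minimal NumPy sketch, not taken from any of the cited works, of how these two metrics are typically computed on binary masks; the function name and the toy 4 × 4 masks are illustrative assumptions only.

```python
import numpy as np

def dice_and_jaccard(pred, target, eps=1e-7):
    """Compute DSC and JI for two binary segmentation masks (arrays of 0/1)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    # DSC = 2|A∩B| / (|A| + |B|), JI = |A∩B| / |A∪B|
    dsc = 2.0 * intersection / (pred.sum() + target.sum() + eps)
    ji = intersection / (union + eps)
    return dsc, ji

# Toy 4x4 masks standing in for a predicted and a ground-truth lung mask
pred = np.array([[0, 1, 1, 0],
                 [1, 1, 1, 0],
                 [1, 1, 0, 0],
                 [0, 0, 0, 0]])
target = np.array([[0, 1, 1, 0],
                   [1, 1, 1, 1],
                   [1, 1, 1, 0],
                   [0, 0, 0, 0]])
dsc, ji = dice_and_jaccard(pred, target)
print(f"DSC = {dsc:.3f}, JI = {ji:.3f}")  # DSC = 0.875, JI = 0.778
```

Note that DSC is always at least as large as JI for the same pair of masks, which is consistent with the DSC and JI columns reported in the table.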