Table 6.
Quantitative comparison of pneumonia and pneumothorax detection methods. The Dice similarity coefficient (DSC), F1-score, and area under the curve (AUC) are reported for comparison. SGD stands for stochastic gradient descent, ReLU for rectified linear unit, lr for learning rate, and AF for activation function.
Method | Optimizer | AF | LR Scheduling | Image size | Pre-processing steps | Dataset | Technique | DSC | F1-Score | AUC |
---|---|---|---|---|---|---|---|---|---|---|
DL [135] | Adam | Sigmoid | lr = 0.001, decreased by a factor of 10 when the validation loss stops improving | 224 × 224 | Image normalization | CXR-14 | DenseNet-121, transfer learning | – | 43.5 | – |
DL [77] | SGD | – | lr = 0.00105 | 512 × 512 | Random scaling, shifts in coordinate space, brightness and contrast adjustment, Gaussian blurring | RSNA | ResNet-50, ResNet-101, Mask R-CNN, data augmentation | – | – | – |
DL [133] | Gradient descent | ReLU, Softmax | lr = 0.0003 | 224 × 224, 227 × 227 | Image resizing and augmentation | Kaggle [112] | AlexNet, ResNet-18, DenseNet-201, SqueezeNet, transfer learning, data augmentation, cross-validation | – | 93.5 | 95.0 |
DL [43] | Adam | – | lr = 0.00001 with a learning-rate decay factor of 0.2 | 512 × 512 | Image resizing and data augmentation (scaling, shear, and rotation) | CXR-14 | RetinaNet single-shot detector with SE-ResNeXt-101, cross-validation | – | – | – |
DL [37] | – | ReLU, Softmax | lr = 0.00001 | 227 × 227 | Image resizing | CXR-14 | VGG-19, CWT, DWT, GLCM, transfer learning, SVM-linear, SVM-RBF, KNN classifier, RF, DT | – | 92.15 | – |
DL [158] | Adam | ReLU, Sigmoid | lr = 0.001 with β₁ = 0.9 and β₂ = 0.999 | 224 × 224 | Image normalization, resizing, cropping, and data augmentation | CheXpert | DenseNet-122, transfer learning | – | – | 70.8 |
DL [176] | Adam | ReLU, Softmax | lr = 0.0005 with β₁ = 0.9 and β₂ = 0.999 | 768 × 768, 1024 × 1024 | Image normalization and data augmentation with random Gamma correction, random brightness and contrast changes, CLAHE, motion blur, median blur, horizontal flip, random shift, random scale, and random rotation | SIIM-ACR, MC | U-Net, SE-ResNeXt-101, EfficientNet-B3, transfer learning | 88.0 | – | – |
DL [1] | Adam | ReLU, Sigmoid | lr = 0.001, decayed per epoch using cosine annealing | 256 × 256, 512 × 512 | Image resizing, normalization, and data augmentation using horizontal flip, one of random contrast, random gamma, or random brightness, and one of elastic transform, grid distortion, or optical distortion | SIIM-ACR | U-Net, ResNet-34, transfer learning, stochastic weight averaging | 83.56 | – | – |
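For reference, the learning-rate policies listed in Table 6 map directly onto standard scheduler APIs. The PyTorch sketch below illustrates three of them: reduction by a factor of 10 on a validation-loss plateau [135], Adam with β₁ = 0.9 and β₂ = 0.999 [158, 176], and per-epoch cosine annealing [1]. It is a minimal illustration, not the authors' code; the placeholder model, the 50-epoch horizon, and the dummy validation loss are assumptions.

```python
# Minimal sketch of the learning-rate policies from Table 6 (placeholder model and epochs).
import torch
import torch.nn as nn

model = nn.Linear(10, 2)  # placeholder model standing in for the CNNs in Table 6

# [135]: Adam, lr = 0.001, reduced by a factor of 10 when the validation loss plateaus
opt_plateau = torch.optim.Adam(model.parameters(), lr=1e-3, betas=(0.9, 0.999))
sched_plateau = torch.optim.lr_scheduler.ReduceLROnPlateau(opt_plateau, mode="min", factor=0.1)

# [1]: lr = 0.001 decayed per epoch with cosine annealing (50-epoch horizon assumed)
opt_cosine = torch.optim.Adam(model.parameters(), lr=1e-3)
sched_cosine = torch.optim.lr_scheduler.CosineAnnealingLR(opt_cosine, T_max=50)

for epoch in range(50):
    val_loss = 0.0            # placeholder: compute the validation loss here
    sched_plateau.step(val_loss)  # plateau scheduler needs the monitored metric
    sched_cosine.step()           # cosine annealing steps once per epoch
```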
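The augmentation pipeline reported for [176] can likewise be expressed with the Albumentations library. The sketch below is illustrative only: the probabilities, blur limits, and shift/scale/rotation ranges are assumptions, not the values used by the cited authors.

```python
# Illustrative Albumentations pipeline for the augmentations listed for [176] in Table 6.
import albumentations as A

train_transform = A.Compose([
    A.OneOf([
        A.RandomGamma(),               # random Gamma correction
        A.RandomBrightnessContrast(),  # random brightness and contrast change
        A.CLAHE(),                     # contrast-limited adaptive histogram equalization
    ], p=0.5),
    A.OneOf([
        A.MotionBlur(),                # motion blur
        A.MedianBlur(blur_limit=5),    # median blur
    ], p=0.3),
    A.HorizontalFlip(p=0.5),
    A.ShiftScaleRotate(shift_limit=0.1, scale_limit=0.1, rotate_limit=15, p=0.5),
    A.Resize(768, 768),                # one of the two input sizes listed for [176]
])

# usage: augmented = train_transform(image=chest_xray_array)["image"]
```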
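Finally, the stochastic weight averaging used in [1] is available in PyTorch through torch.optim.swa_utils. The toy model, data, SWA learning rate, and start epoch below are placeholders chosen only to keep the sketch self-contained; [1] applied the technique to a U-Net with a ResNet-34 encoder.

```python
# Self-contained sketch of stochastic weight averaging with torch.optim.swa_utils.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from torch.optim.swa_utils import AveragedModel, SWALR, update_bn

# Toy segmentation-style model and data (placeholders).
model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.BatchNorm2d(8), nn.ReLU(), nn.Conv2d(8, 1, 1)
)
loader = DataLoader(
    TensorDataset(torch.randn(8, 1, 64, 64), torch.randint(0, 2, (8, 1, 64, 64)).float()),
    batch_size=4,
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.BCEWithLogitsLoss()
swa_model = AveragedModel(model)          # keeps the running average of the weights
swa_scheduler = SWALR(optimizer, swa_lr=1e-4)  # assumed SWA learning rate
swa_start = 3                             # assumed epoch at which averaging begins

for epoch in range(5):
    for x, y in loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
    if epoch >= swa_start:
        swa_model.update_parameters(model)
        swa_scheduler.step()

update_bn(loader, swa_model)  # recompute BatchNorm statistics for the averaged weights
```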