J Clin Med. 2020 Jan 17;9(1):248. doi: 10.3390/jcm9010248

Table 1.

Recently published AI approaches supporting clinical decision-making in pneumonia.

Reference: Kermany et al. [13], Cell, 2018
Main goal: Detect pneumonia and distinguish viral from bacterial etiology
Applied method: Neural network
Dataset: 5232 chest X-ray images for the training phase and 624 images for the test phase
Results: Pneumonia detection accuracy of 92.8%; viral vs. bacterial accuracy of 90.7%

Reference: Stephen et al. [14], Journal of Healthcare Engineering, 2019
Main goal: Pneumonia classification
Applied method: Neural network with augmentation methods to artificially increase the size and quality of the dataset
Dataset: 5856 X-ray images; 3722 in the training set and 2134 in the validation set
Results: Training accuracy of 0.9531; validation accuracy of 0.9373

Reference: Heckerling et al. [15], Clinical Applications, 2003
Main goal: Predict the presence of pneumonia among patients with acute respiratory complaints
Applied method: Neural networks
Dataset: 1023 patients; training cohort of 907 and testing cohort of 116
Results: Training cohort: sensitivity of 0.842, specificity of 0.593; testing cohort: sensitivity of 0.829, specificity of 0.547

Reference: Hwang et al. [16], JAMA Network Open, 2019
Main goal: Develop a deep learning–based algorithm for major thoracic diseases; comparison with physicians and external validation
Applied method: Deep learning (neural networks)
Dataset: 54,221 X-rays with normal findings and 41,140 with abnormal findings
Results: Image-wise classification: AUROC of 0.965 (in-house) and 0.979 (external validation); lesion-wise localization: AUAFROC of 0.916 (in-house) and 0.972 (external validation); comparison with physicians: DLAD AUROC of 0.983, higher than all 3 observer groups (p < 0.005)

Abbreviations: AUROC: area under the receiver operating characteristic curve; AUAFROC: area under the alternative free-response receiver operating characteristic curve; DLAD: deep learning–based automatic detection algorithm.
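The studies in Table 1 report their performance with a small set of standard metrics: accuracy, sensitivity and specificity from a confusion matrix, and AUROC. As an illustrative sketch only (none of the numbers or code below come from the cited papers; the confusion counts and classifier scores are made up), these metrics can be computed as follows:

```python
# Illustrative sketch of the metrics reported in Table 1.
# All counts and scores below are hypothetical, not taken from the studies.

def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

def auroc(scores_pos, scores_neg):
    """AUROC as the probability that a randomly chosen positive case
    scores higher than a randomly chosen negative case
    (equivalent to the normalized Mann-Whitney U statistic)."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical testing-cohort confusion counts.
sens, spec = sensitivity_specificity(tp=83, fn=17, tn=55, fp=45)
print(f"sensitivity={sens:.3f} specificity={spec:.3f}")
# -> sensitivity=0.830 specificity=0.550

# Hypothetical model scores for pneumonia-positive vs. -negative images.
print(f"AUROC={auroc([0.9, 0.8, 0.7, 0.4], [0.6, 0.3, 0.2, 0.1]):.3f}")
# -> AUROC=0.938
```

The rank-based AUROC above avoids building an explicit ROC curve, which keeps the sketch dependency-free; in practice a library routine over the full score distribution would be used.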